
    Command and Control Systems for Search and Rescue Robots

    The novel application of unmanned systems in humanitarian Search and Rescue (SAR) operations has created a need for dedicated multi-robot Command and Control (RC2) systems. This societal application of robotics requires human-robot interfaces for controlling a large fleet of heterogeneous robots deployed across multiple domains of operation (ground, aerial, and marine). This chapter provides an overview of the Command, Control and Intelligence (C2I) system developed within the scope of Integrated Components for Assisted Rescue and Unmanned Search operations (ICARUS). The chapter follows the system's life cycle, beginning with a description of use cases and deployment scenarios developed in collaboration with SAR teams as end-users. This is followed by the system design and architecture, the core technologies used in implementing the C2I, and the iterative integration phases with field deployments for evaluating and improving the system. The main subcomponents are a central Mission Planning and Coordination System (MPCS), field Robot Command and Control (RC2) subsystems with a portable force-feedback exoskeleton interface for robot-arm tele-manipulation, and field mobile devices. The distribution of these C2I subsystems and their communication links for unmanned SAR operations is described in detail. Field demonstrations of the C2I system with SAR personnel assisted by unmanned systems provide an outlook for bringing such systems into mainstream SAR operations.
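
    As a concrete illustration of the communication links described above, the following Python sketch shows the kind of status message a field RC2 subsystem might push to the central MPCS. All field names and the message format are assumptions for illustration; the abstract does not specify the actual ICARUS protocol.

```python
# Hypothetical sketch of an RC2-to-MPCS telemetry message; field names
# are illustrative assumptions, not taken from the ICARUS project.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class RobotStatus:
    robot_id: str        # unique ID within the heterogeneous fleet
    domain: str          # "ground", "aerial", or "marine"
    position: tuple      # (latitude, longitude, altitude_m)
    battery_pct: float   # remaining battery, 0-100
    task: str            # current task assigned by the MPCS

def to_mpcs_packet(status: RobotStatus) -> bytes:
    """Serialize a status update for transmission over the data link."""
    packet = {"timestamp": time.time(), "status": asdict(status)}
    return json.dumps(packet).encode("utf-8")

# Example: a ground robot reporting its state while searching a sector.
print(to_mpcs_packet(RobotStatus("ugv-01", "ground", (51.05, 4.35, 0.0),
                                 87.5, "search-sector-3")))
```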

    RIDE: A Mixed-Mode Control Interface for Mobile Robot Teams

    There is a growing need for robot control interfaces that allow a single user to effectively control a large number of mostly autonomous robots. The challenges in controlling such a collection of robots are very similar to the challenges of controlling characters in some genres of video games. In this paper, we argue that interfaces based on elements from computer video games are effective tools for the control of large robot teams. We present RIDE, the Robot Interactive Display Environment, an example of such an interface, and give the results of initial user studies with the interface, which lend support to our claim.

    Attention Allocation for Human Multi-Robot Control: Cognitive Analysis based on Behavior Data and Hidden States

    Human multi-robot interaction exploits both the human operator's high-level decision-making skills and the robotic agents' powerful computing and motion abilities. While controlling multi-robot teams, an operator's attention must constantly shift between individual robots to maintain sufficient situation awareness. To conserve the operator's attentional resources, a robot that can self-report its abnormal status can help the operator focus her attention on emergent tasks rather than unneeded routine checks. With these self-report aids, human-robot interaction becomes a queuing framework in which the robots act as clients requesting interaction and the operator acts as the server responding to these job requests. This paper examines two queuing schemes: a self-paced open queue that displays all robots' normal/abnormal conditions, and a forced-paced shortest-job-first (SJF) queue that shows a single robot's request at a time. Because a robot may misreport its failures in various situations, the effects of imperfect automation were also investigated. The results suggest that the SJF attentional scheduling approach provides stable performance in both the primary task (locating potential targets) and the secondary task (resolving robots' failures), regardless of the system's reliability level. However, conventional measures (e.g., the number of targets marked) reveal little about users' underlying cognitive strategies and may fail to reflect the user's true intent. As understanding users' intentions is critical to providing appropriate cognitive aids that enhance task performance, a Hidden Markov Model (HMM) is used to examine operators' underlying cognitive intent and identify the unobservable cognitive states. The HMM results demonstrate fundamental differences among the queuing mechanisms and reliability conditions. The findings suggest that HMMs can be helpful in investigating the use of human cognitive resources in multitasking environments.
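
    To make the forced-paced SJF scheme concrete, the following Python sketch queues robot interaction requests by estimated service time and serves the shortest job first. The robot names and service-time estimates are illustrative assumptions, not values from the study.

```python
# Minimal sketch of a shortest-job-first (SJF) request queue: robots
# (the "clients") enqueue failure-resolution requests, and the operator
# (the "server") is always shown the shortest pending job first.
import heapq

class SJFQueue:
    def __init__(self):
        self._heap = []  # (estimated_service_time_s, robot_id)

    def request(self, robot_id: str, est_service_time: float) -> None:
        """A robot enqueues an interaction request."""
        heapq.heappush(self._heap, (est_service_time, robot_id))

    def next_request(self):
        """Pop the request with the shortest estimated service time."""
        return heapq.heappop(self._heap) if self._heap else None

queue = SJFQueue()
queue.request("robot-3", est_service_time=12.0)  # complex failure
queue.request("robot-7", est_service_time=3.0)   # quick reset
print(queue.next_request())  # -> (3.0, 'robot-7'): shortest job first
```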

    Agricultural Swarm Robotics with Distributed Sensing

    To demonstrate a symbiotic relationship between robotics and distributed sensing, the team designed and built a set of sensor nodes that work in tandem with a mobile robot, a mesh network, and a server database. The concept was validated by implementing this platform to log climate data for a field of crops.
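
    A minimal Python sketch of the node-to-server logging loop is shown below, assuming a periodic sensor read forwarded over the mesh to the server database. The sensor driver and transmit call are hypothetical stand-ins; the abstract does not name the hardware or network stack used.

```python
# Illustrative sensor-node loop: read climate data, forward it over the
# mesh toward the server database. All calls are hypothetical stand-ins.
import random
import time

def read_climate() -> dict:
    """Stand-in for a real temperature/humidity sensor driver."""
    return {"temp_c": 20 + random.uniform(-2, 2),
            "humidity_pct": 55 + random.uniform(-5, 5)}

def transmit(node_id: str, reading: dict) -> None:
    """Stand-in for forwarding a reading over the mesh to the server."""
    print(f"node {node_id} -> server: {reading}")

def log_forever(node_id: str, period_s: float = 60.0) -> None:
    """Periodically sample and forward climate data."""
    while True:
        transmit(node_id, read_climate())
        time.sleep(period_s)

# Single iteration, for demonstration:
transmit("node-1", read_climate())
```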

    Teaching systems engineering to undergraduates; Experiences and considerations

    Undergraduates need a teaching style that fits their limited experience. This is especially an issue in systems engineering, since systems engineering connects to many different stakeholders with many different concerns, while the students have thus far experienced only a few of these concerns and met only a few stakeholders. Students need to become aware of the inherent ambiguities, uncertainties, and unknowns in the systems world, in contrast to the focused world of mono-disciplinary engineering. There is a difference between the more traditional engineering disciplines (mechanical, electrical, etc.) and the emerging, broader disciplines such as industrial design engineering and systems engineering itself.

    Autonomous Capabilities for Small Unmanned Aerial Systems Conducting Radiological Response: Findings from a High-fidelity Discovery Experiment

    This article presents a preliminary work domain theory and identifies autonomous vehicle, navigational, and mission capabilities and challenges for small unmanned aerial systems (SUASs) responding to a radiological disaster. Radiological events are representative of applications that involve flying at low altitudes and in close proximity to structures. To more formally understand the guidance and control demands, the environment in which a SUAS has to function, and the expected missions, tasks, and strategies for responding to an incident, a discovery experiment was performed in 2013. The experiment placed a radiological source emitting at 10 times background radiation in the simulated collapse of a multistory hospital. Two SUASs, an AirRobot 100B and a Leptron Avenger, were inserted with subject matter experts into the response, providing high operational fidelity. The responders expected the SUASs to fly at altitudes between 0.3 and 30 m and to hover at 1.5 m from urban structures. The proximity to a building reduced GPS satellite coverage, challenging existing vehicle autonomy. Five new navigational capabilities were identified: scan, obstacle avoidance, contour following, environment-aware return to home, and return to highest reading. Furthermore, the data-to-decision process could be improved with autonomous data digestion and visualization capabilities. This article is expected to contribute to a better understanding of autonomy in a SUAS, serve as a requirements document for advanced autonomy, and illustrate how discovery experimentation serves as a design tool for autonomous vehicles.
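
    As a sketch of what a "return to highest reading" capability might look like, the following Python fragment records the pose at which the strongest radiation reading was observed during a scan and exposes it as a return waypoint. The pose format and counts-per-second units are assumptions for illustration; the experiment's actual autopilot interface is not described in the abstract.

```python
# Hedged sketch of "return to highest reading": track the pose with the
# strongest dosimeter sample seen so far, then return to it after the scan.
class HighestReadingTracker:
    def __init__(self):
        self.best_reading = float("-inf")
        self.best_pose = None  # assumed (x_m, y_m, altitude_m)

    def update(self, pose, reading_cps: float) -> None:
        """Call on every new radiation sample while scanning."""
        if reading_cps > self.best_reading:
            self.best_reading = reading_cps
            self.best_pose = pose

    def return_waypoint(self):
        """Waypoint to command once the scan is complete."""
        return self.best_pose

tracker = HighestReadingTracker()
tracker.update((1.0, 2.0, 1.5), reading_cps=120)
tracker.update((3.0, 0.5, 1.5), reading_cps=540)  # hot spot
print(tracker.return_waypoint())                  # -> (3.0, 0.5, 1.5)
```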

    Deep Learning, transparency and trust in Human Robot Teamwork

    For autonomous AI systems to be accepted and trusted, users should be able to understand the reasoning process of the system (i.e., the system should be transparent). Robotics presents unique programming difficulties in that systems must map from complicated sensor inputs, such as camera feeds and laser scans, to outputs such as joint angles and velocities. Advances in deep neural networks are now making it possible to replace laboriously handcrafted features and control code by learning control policies directly from high-dimensional sensor inputs. Because Atari games, where these capabilities were first demonstrated, replicate the robotics problem, they are ideal for investigating how humans might come to understand and interact with agents that have not been explicitly programmed. We present computational and human results for making deep reinforcement learning networks (DRLN) more transparent using object saliency visualizations of internal states, and we test the effectiveness of expressing saliency through teleological verbal explanations.
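
    One common way to compute object saliency for a learned policy, and a plausible reading of the approach described above, is perturbation-based: occlude a candidate object region and measure how much the network's output changes. The policy() stand-in and the region list below are illustrative assumptions, not the paper's actual model.

```python
# Sketch of perturbation-based object saliency: blank out each candidate
# object region and score it by how much the policy's output shifts.
import numpy as np

def policy(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a trained DRL network mapping a frame to action logits."""
    return np.array([frame.mean(), frame.std()])  # dummy output

def object_saliency(frame: np.ndarray, regions: dict) -> dict:
    """Score each (y0, y1, x0, x1) region by its effect on the policy."""
    baseline = policy(frame)
    scores = {}
    for name, (y0, y1, x0, x1) in regions.items():
        occluded = frame.copy()
        occluded[y0:y1, x0:x1] = frame.mean()  # blank out the object
        scores[name] = float(np.linalg.norm(policy(occluded) - baseline))
    return scores

# Example on a random 84x84 frame with two hypothetical object regions.
frame = np.random.rand(84, 84)
print(object_saliency(frame, {"ball": (10, 20, 30, 40),
                              "paddle": (70, 75, 0, 84)}))
```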