
    Assistive Planning in Complex, Dynamic Environments: a Probabilistic Approach

    We explore the probabilistic foundations of shared control in complex, dynamic environments. To do so, we formulate shared control as a random process and describe the joint distribution that governs its behavior. For tractability, we model the relationships between the operator, the autonomy, and the crowd as an undirected graphical model. Further, we introduce an interaction function between the operator and the robot, which we call "agreeability"; in combination with the methods developed in (Trautman, IJRR 2015), we extend a cooperative collision avoidance autonomy to shared control. We thereby quantify the notion of simultaneously optimizing over agreeability (between the operator and the autonomy) and over safety and efficiency in crowded environments. We show that for a particular form of interaction function between the autonomy and the operator, linear blending is recovered exactly. Moreover, to recover linear blending, unimodal restrictions must be placed on the models describing the operator and the autonomy; these restrictions raise questions about the flexibility and applicability of the linear blending framework. We also present an extension of linear blending called "operator biased linear trajectory blending" (which formalizes some recent approaches in linear blending, such as (Dragan, IJRR 2013)) and show that not only is it a restrictive special case of our probabilistic approach but, more importantly, it is statistically unsound and thus mathematically unsuitable for implementation. Instead, we suggest a statistically principled approach that guarantees data is used in a consistent manner, and we show how this alternative converges to the full probabilistic framework. We conclude by proving that, in general, linear blending is suboptimal with respect to the joint metric of agreeability, safety, and efficiency.
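
    To make the unimodal restriction concrete, consider a minimal numpy sketch (an illustration, not code from the paper; the 1-D command space, variable names, and precision-based arbitration weight are all assumptions): when the operator and the autonomy are each modeled as a single Gaussian over the command, fusing them by multiplying the two densities yields a mean that is exactly a linear blend of the two individual commands.

    ```python
    import numpy as np

    def fuse_gaussians(mu_h, var_h, mu_a, var_a):
        """Product of two 1-D Gaussians (operator and autonomy models).

        Returns the mean and variance of the fused command distribution.
        """
        var = 1.0 / (1.0 / var_h + 1.0 / var_a)   # precisions add
        mu = var * (mu_h / var_h + mu_a / var_a)  # precision-weighted mean
        return mu, var

    def linear_blend(mu_h, var_h, mu_a, var_a):
        """Linear blending with an arbitration weight derived from the variances."""
        w = var_a / (var_h + var_a)  # more weight to the less uncertain agent
        return w * mu_h + (1.0 - w) * mu_a

    mu_h, var_h = 0.8, 0.20  # operator's commanded velocity and uncertainty
    mu_a, var_a = 0.2, 0.05  # autonomy's commanded velocity and uncertainty

    mu_fused, _ = fuse_gaussians(mu_h, var_h, mu_a, var_a)
    print(np.isclose(mu_fused, linear_blend(mu_h, var_h, mu_a, var_a)))  # True
    ```

    Once either model becomes multimodal (e.g., a crowd-induced mixture over trajectories), the fused mean is no longer a fixed linear combination of the two commands, which is the flexibility concern the abstract raises.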

    Autonomous Capabilities for Small Unmanned Aerial Systems Conducting Radiological Response: Findings from a High-fidelity Discovery Experiment

    This article presents a preliminary work domain theory and identifies autonomous vehicle, navigational, and mission capabilities and challenges for small unmanned aerial systems (SUASs) responding to a radiological disaster. Radiological events are representative of applications that involve flying at low altitudes and in close proximity to structures. To understand more formally the guidance and control demands, the environment in which the SUAS has to function, and the expected missions, tasks, and strategies in responding to an incident, a discovery experiment was performed in 2013. The experiment placed a radiological source emitting at 10 times background radiation in the simulated collapse of a multistory hospital. Two SUASs, an AirRobot 100B and a Leptron Avenger, were inserted with subject matter experts into the response, providing high operational fidelity. Responders expected the SUASs to fly at altitudes between 0.3 and 30 m and to hover at 1.5 m from urban structures. The proximity to a building reduced GPS satellite coverage, challenging existing vehicle autonomy. Five new navigational capabilities were identified: scan, obstacle avoidance, contour following, environment-aware return to home, and return to highest reading. Furthermore, the data-to-decision process could be improved with autonomous data digestion and visualization capabilities. This article is expected to contribute to a better understanding of autonomy in a SUAS, to serve as a requirements document for advanced autonomy, and to illustrate how discovery experimentation serves as a design tool for autonomous vehicles.
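
    The article names "return to highest reading" but does not specify an implementation; the following Python sketch shows one straightforward realization (the class, field names, and flat waypoint representation are hypothetical): log georeferenced readings during flight, then fly back to the pose where the strongest reading was recorded.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Sample:
        x: float    # position in a local georeferenced frame (m)
        y: float
        z: float
        cpm: float  # radiation reading (counts per minute)

    class ReadingLog:
        """Log readings in flight; recall the hottest location on demand."""

        def __init__(self):
            self.samples: list[Sample] = []

        def record(self, x, y, z, cpm):
            self.samples.append(Sample(x, y, z, cpm))

        def return_target(self):
            """Waypoint for the 'return to highest reading' behavior."""
            best = max(self.samples, key=lambda s: s.cpm)
            return (best.x, best.y, best.z)

    log = ReadingLog()
    log.record(0.0, 0.0, 1.5, 120.0)   # near background
    log.record(4.2, 1.0, 1.5, 1450.0)  # near the simulated source
    print(log.return_target())         # -> (4.2, 1.0, 1.5)
    ```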

    Semi-autonomous exploration of multi-floor buildings with a legged robot

    This paper presents preliminary results of a semi-autonomous building exploration behavior using the hexapedal robot RHex. Stairwells are used in virtually all multi-floor buildings, so for a mobile robot to effectively explore, map, clear, monitor, or patrol such buildings it must be able to ascend and descend stairwells. However, most conventional mobile robots based on a wheeled platform are unable to traverse stairwells, motivating the use of a more mobile legged machine. The semi-autonomous behavior uses a human driver to provide steering input to the robot, as would be the case in, e.g., a tele-operated building exploration mission; gait selection and the transitions between the walking and stair-climbing gaits are entirely autonomous. The implementation uses an RGB-D camera for stair acquisition, which offers several advantages over a previously documented detector based on a laser range finder, including significantly reduced acquisition time. The sensor package used here also allows for considerable expansion of this behavior; for example, complete automation of the building exploration task, driven by a mapping algorithm and a higher-level planner, is presently under development.
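
    The abstract does not detail the RGB-D stair detector; the following numpy sketch (thresholds, frame conventions, and function names are assumptions, not the paper's) illustrates one common height-histogram approach to stair acquisition: in a gravity-aligned point cloud, stair treads appear as dense horizontal bands of points at roughly uniformly spaced heights.

    ```python
    import numpy as np

    def detect_treads(heights, bin_width=0.02, min_points=200):
        """Cluster point heights (m) into candidate stair treads.

        heights: z-coordinates of an RGB-D point cloud expressed in a
        gravity-aligned robot frame. Returns estimated tread heights.
        """
        bins = np.arange(heights.min(), heights.max() + bin_width, bin_width)
        counts, edges = np.histogram(heights, bins=bins)
        dense = np.where(counts > min_points)[0]  # dense horizontal bands
        return 0.5 * (edges[dense] + edges[dense + 1])

    def looks_like_stairs(tread_heights, min_rise=0.12, max_rise=0.22):
        """Accept only roughly uniform risers in a plausible range; in
        practice, adjacent bins from the same tread would first be merged."""
        rises = np.diff(np.sort(tread_heights))
        return len(rises) >= 2 and bool(np.all((rises > min_rise) & (rises < max_rise)))
    ```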

    Use of a Small Unmanned Aerial System for the SR-530 Mudslide Incident near Oso, Washington

    The Center for Robot-Assisted Search and Rescue deployed three commercially available small unmanned aerial systems (SUASs) to the 2014 SR-530 Washington State mudslides: an AirRobot AR100B quadrotor, an Insitu Scan Eagle, and a PrecisionHawk Lancaster. The purpose of the flights was to allow geologists and hydrologists to assess the imminent risk to responders of loss of life from further slides and flooding, as well as to gain a more comprehensive understanding of the event. The AirRobot AR100B, in conjunction with PrecisionHawk postprocessing software, created two-dimensional (2D) and 3D reconstructions of the inaccessible "moonscape" region of the slide and provided engineers with a real-time remote-presence assessment of river mitigation activities. The AirRobot was able to cover 30–40 acres from an altitude of 42 m (140 ft) in 48 min of flight time and to generate interactive 3D reconstructions within 3 h on a laptop in the field. The deployment is the 17th known use of SUASs for disasters, and it illustrates the evolution of SUASs from tactical data collection platforms to strategic data-to-decision systems. It was the first known instance in the United States in which an airspace deconfliction plan allowed a UAS to operate with manned vehicles in the same airspace during a disaster. The article also describes how public concerns over SUAS safety and privacy led to the cancellation of the initial flights, and it draws lessons on the operational considerations imposed by the terrain, trees, power lines, and accessibility, and on a safe human:robot ratio. The article identifies open research questions in computer vision, mission planning, and data archiving, curation, and mining.

    Aerial Remote Sensing in Agriculture: A Practical Approach to Area Coverage and Path Planning for Fleets of Mini Aerial Robots

    This paper describes a system that enables precision agriculture techniques. The application is based on deploying a team of unmanned aerial vehicles that take georeferenced pictures, from which a full map is created in postprocessing by mosaicking. The main contribution of this work is practical experimentation with an integrated tool, though contributions in several other areas are also reported. Among them is a new one-phase automatic task-partitioning manager based on negotiation among the aerial vehicles, which takes their state and capabilities into account. Once the individual tasks are assigned, an optimal path planning algorithm determines the best path for each vehicle to follow. In addition, a robust flight controller, based on a control law that improves the maneuverability of the quadrotors, has been designed. A set of field tests was performed to analyze all the capabilities of the system, from task negotiation to final performance; these experiments also allowed testing control robustness under different weather conditions.
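
    As a sketch of the coverage geometry only (the paper's one-phase negotiation manager additionally weighs vehicle state and capabilities, which this ignores; all names here are hypothetical), one can partition the field into equal strips, one per vehicle, and sweep each strip with a boustrophedon path whose pass spacing matches the camera footprint at the surveying altitude.

    ```python
    def partition(x_min, x_max, y_min, y_max, n):
        """Split a rectangular field into n equal strips, one per vehicle."""
        h = (y_max - y_min) / n
        return [(x_min, x_max, y_min + i * h, y_min + (i + 1) * h) for i in range(n)]

    def lawnmower(x_min, x_max, y_min, y_max, swath):
        """Boustrophedon waypoints over one strip; swath is the camera
        footprint width (m) at the chosen altitude."""
        waypoints, y, leftward = [], y_min, False
        while y <= y_max:
            row = [(x_max, y), (x_min, y)] if leftward else [(x_min, y), (x_max, y)]
            waypoints.extend(row)
            y += swath
            leftward = not leftward
        return waypoints

    strips = partition(0.0, 200.0, 0.0, 90.0, n=3)      # three aerial vehicles
    paths = [lawnmower(*s, swath=8.0) for s in strips]  # one sweep per vehicle
    ```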

    Operating at a Distance: How a Teleoperated Surgical Robot Reconfigures Teamwork in the Operating Room

    This paper investigates how a teleoperated surgical robot reconfigures teamwork in the operating room by spatially redistributing team members. We report findings from two years of fieldwork at two hospitals, including interviews and video data. We find that while team members in non-robotic cases huddle together, physically touching, the introduction of a surgical robot increases the physical and sensory distance between team members. This spatial rearrangement has implications for both the cognitive and the affective dimensions of collaborative surgical work. Cognitive distance increases, necessitating new efforts to maintain situation awareness and common ground. Moreover, affective distance is introduced, decreasing sensitivity to shared and non-shared affective states and leading to new practices aimed at restoring affective connection within the team. We describe the new forms of physical, cognitive, and affective distance associated with teleoperated robotic surgery, and the effects these have on power distribution, practice, and collaborative experience within the surgical team.

    Advances in Human Robot Interaction for Cloud Robotics applications

    This thesis analyzes different and innovative techniques for Human-Robot Interaction, with a focus on interaction with flying robots. The first part is a preliminary survey of state-of-the-art interaction techniques. The first project, Fly4SmartCity, analyzes the interaction between humans (citizens and operators) and drones mediated by a cloud robotics platform. This is followed by an application of the sliding-autonomy paradigm and an analysis of the different degrees of autonomy supported by a cloud robotics platform. The last part is dedicated to the most innovative technique for human-drone interaction, developed in the User's Flying Organizer (UFO) project, which aims to build a flying robot able to project information into the environment by exploiting concepts of Spatial Augmented Reality.

    Coactive Design: Designing Support for Interdependence in Joint Activity

    Human-robot interaction for telemanipulation by small unmanned aerial systems

    This dissertation investigated human-robot interaction (HRI) for the Mission Specialist role in a telemanipulating unmanned aerial system (UAS). The emergence of commercial unmanned aerial vehicle (UAV) platforms has transformed the civil and environmental engineering industries through applications such as surveying, remote infrastructure inspection, and construction monitoring, which normally use UAVs for visual inspection only. Recent developments, however, suggest that performing physical interactions in dynamic environments will be an important task for future UAS, particularly in applications such as environmental sampling and infrastructure testing. In all domains, the availability of a Mission Specialist to monitor the interaction and intervene when necessary is essential for successful deployments. Additionally, manual operation is the default mode for safety reasons; therefore, understanding Mission Specialist HRI is important for all small telemanipulating UAS in civil engineering, regardless of system autonomy and application. A five-subject exploratory study and a 36-subject experimental study were conducted to evaluate variations of a dedicated, mobile Mission Specialist interface for aerial telemanipulation from a small UAV. The Shared Roles Model was used to model the UAS human-robot team, and the Mission Specialist and Pilot roles were informed by the current state of practice for manipulating UAVs. Three interface camera-view designs were tested using a within-subjects design: an egocentric view (perspective from the manipulator), an exocentric view (perspective from the UAV), and a mixed egocentric-exocentric view. The experimental trials required Mission Specialist participants to complete a series of tasks with physical, visual, and verbal requirements. These studies found that subjects who preferred the exocentric condition performed tasks 50% faster when using their preferred interface; however, interface preference did not affect performance for participants who preferred the mixed condition. This led to a second finding: participants who preferred the exocentric condition were distracted by the egocentric view during the mixed condition, likely because of cognitive tunneling, and the data suggest tradeoffs between performance improvements and attentional costs when adding information, in the form of multiple views, to the Mission Specialist interface. Based on this empirical evaluation of multiple camera views, the exocentric view was recommended for use in a dedicated Mission Specialist telemanipulation interface. Contributions of this thesis include: i) the first focused HRI study of aerial telemanipulation, ii) an evaluative model for telemanipulation performance, iii) new recommendations for aerial telemanipulation interfacing, and iv) code, hardware designs, and system architectures contributed to the open-source UAV community. The evaluative model provides a detailed framework, complementing the abstraction of the Shared Roles Model, that can be used to measure the effects of changes in system, environment, operator, and interfacing factors on performance. The practical contributions of this work will expedite the adoption of manipulating UAV technologies by scientists, researchers, and stakeholders, particularly those in civil engineering, who will directly benefit from improved manipulating UAV performance.