
    Aerial-Ground collaborative sensing: Third-Person view for teleoperation

    Rapid deployment and operation are key requirements in time-critical applications, such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between them in GPS-denied environments. One MAV can thereby support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios. Comment: Accepted for publication in the 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR).
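The follow-and-transfer idea above can be illustrated with a minimal sketch: a proportional controller that steers the MAV toward a fixed third-person offset behind whichever ground robot it currently serves. The function name, the offset, and the gain are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: keep a MAV at a fixed third-person offset
# behind a tracked ground robot using a proportional controller.
# Offset (behind and above the robot) and gain are invented values.

def follow_step(mav_pos, robot_pos, offset=(-2.0, 0.0, 1.5), kp=0.8):
    """Return a velocity command steering the MAV toward the
    desired third-person viewpoint (robot position + offset)."""
    target = tuple(r + o for r, o in zip(robot_pos, offset))
    return tuple(kp * (t - m) for t, m in zip(target, mav_pos))

# One control step: MAV at the origin, robot 4 m ahead of it.
cmd = follow_step((0.0, 0.0, 0.0), (4.0, 0.0, 0.0))
```

In this toy model, handing the MAV over to another ground robot is simply a matter of switching the `robot_pos` argument to the other robot's globally localized pose.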

    Autonomous Capabilities for Small Unmanned Aerial Systems Conducting Radiological Response: Findings from a High-fidelity Discovery Experiment

    This article presents a preliminary work domain theory and identifies autonomous vehicle, navigational, and mission capabilities and challenges for small unmanned aerial systems (SUASs) responding to a radiological disaster. Radiological events are representative of applications that involve flying at low altitudes and in close proximity to structures. To more formally understand the guidance and control demands, the environment in which the SUAS has to function, and the expected missions, tasks, and strategies to respond to an incident, a discovery experiment was performed in 2013. The experiment placed a radiological source emitting at 10 times background radiation in the simulated collapse of a multistory hospital. Two SUASs, an AirRobot 100B and a Leptron Avenger, were inserted with subject matter experts into the response, providing high operational fidelity. The responders expected the SUASs to fly at altitudes between 0.3 and 30 m, and to hover at 1.5 m from urban structures. The proximity to a building introduced a decrease in GPS satellite coverage, challenging existing vehicle autonomy. Five new navigational capabilities were identified: scan, obstacle avoidance, contour following, environment-aware return to home, and return to highest reading. Furthermore, the data-to-decision process could be improved with autonomous data digestion and visualization capabilities. This article is expected to contribute to a better understanding of autonomy in an SUAS, serve as a requirements document for advanced autonomy, and illustrate how discovery experimentation serves as a design tool for autonomous vehicles.

    Adoption of vehicular ad hoc networking protocols by networked robots

    This paper focuses on the utilization of wireless networking in the robotics domain. Many researchers have already equipped their robots with wireless communication capabilities, stimulated by the observation that multi-robot systems tend to have several advantages over their single-robot counterparts. Typically, this integration of wireless communication is tackled in a quite pragmatic manner; only a few authors have presented novel Robotic Ad Hoc Network (RANET) protocols that were designed specifically with robotic use cases in mind. This is in sharp contrast with the domain of Vehicular Ad Hoc Networks (VANETs). This observation is the starting point of this paper: if the results of previous efforts focusing on VANET protocols could be reused in the RANET domain, this could lead to rapid progress in the field of networked robots. To investigate this possibility, this paper provides a thorough overview of the related work in the domains of robotic and vehicular ad hoc networks. Based on this information, an exhaustive list of requirements is defined for both types of network. It is concluded that the most significant difference lies in the fact that VANET protocols are oriented towards low-throughput messaging, while RANET protocols have to support high-throughput media streaming as well. Although not always with equal importance, all other defined requirements are valid for both protocol families. This leads to the conclusion that cross-fertilization between them is an appealing approach for future RANET research. To support such developments, this paper concludes with the definition of an appropriate working plan.

    Detection of bodies in maritime rescue operations using Unmanned Aerial Vehicles with multispectral cameras

    In this study, we use unmanned aerial vehicles equipped with multispectral cameras to search for bodies in maritime rescue operations. A series of flights were performed in open-water scenarios in the northwest of Spain, using a certified aquatic rescue dummy in dangerous areas and real people when the weather conditions allowed it. The multispectral images were aligned and used to train a convolutional neural network for body detection. An exhaustive evaluation was performed to assess the best combination of spectral channels for this task. Three approaches based on a MobileNet topology were evaluated, using (a) the full image, (b) a sliding window, and (c) a precise localization method. The first method classifies an input image as containing a body or not, the second uses a sliding window to yield a class for each subimage, and the third uses transposed convolutions returning a binary output in which the body pixels are marked. In all cases, the MobileNet architecture was modified by adding custom layers and preprocessing the input to align the multispectral camera channels. Evaluation shows that the proposed methods yield reliable results, obtaining the best classification performance when combining green, red-edge, and near-infrared channels. We conclude that the precise localization approach is the most suitable method, obtaining a similar accuracy to the sliding window while achieving a spatial localization close to 1 m. The presented system is about to be implemented for real maritime rescue operations carried out by Babcock Mission Critical Services Spain. This study was performed in collaboration with Babcock MCS Spain and funded by the Galicia Region Government through the Civil UAVs Initiative program, the Spanish Government's Ministry of Economy, Industry, and Competitiveness through the RTC-2014-1863-8 and INAER4-14Y (IDI-20141234) projects, and grant number 730897 under the HPC-EUROPA3 project supported by Horizon 2020.
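Of the three approaches described, the sliding-window method is the easiest to sketch. The stub below stands in for the modified MobileNet classifier with a hypothetical brightness test; only the window bookkeeping reflects the described method, and the image and threshold are invented.

```python
# Illustrative sketch of the sliding-window approach: a per-window
# classifier is applied across the image grid, and windows flagged as
# "body" are collected. A stub classifier replaces the real CNN here.

def sliding_window_detect(image, win=2, stride=2, classify=None):
    """image: 2D list of pixel values. Returns the top-left corners
    of windows the classifier flags as containing a body."""
    hits = []
    h, w = len(image), len(image[0])
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            window = [row[x:x + win] for row in image[y:y + win]]
            if classify(window):
                hits.append((y, x))
    return hits

# Toy 4x4 "image": bright block in the lower-right quadrant.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
bright = lambda w: sum(sum(r) for r in w) > 10   # stand-in classifier
print(sliding_window_detect(img, classify=bright))  # -> [(2, 2)]
```

The precise localization method refines this by predicting a per-pixel mask with transposed convolutions instead of one label per window.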

    Distribution of DDS-cerberus Authenticated Facial Recognition Streams

    Successful missions in the field often rely upon communication technologies for tactics and coordination. One middleware used in securing these communication channels is the Data Distribution Service (DDS), which employs a publish-subscribe model. However, researchers have found several security vulnerabilities in DDS implementations. DDS-Cerberus (DDS-C) is a security layer implemented into DDS to mitigate impersonation attacks using Kerberos authentication and ticketing. Even with the addition of DDS-C, the real-time message delivery of DDS needs to be upheld. This paper extends our previous work to analyze DDS-C's impact on performance in a use case implementation. The use case covers an artificial intelligence (AI) scenario that connects edge sensors across a commercial network. Specifically, it characterizes how DDS-C performs between unmanned aerial vehicles (UAVs), the cloud, and video streams for facial recognition. The experiments send a set number of video frames over the network using DDS to be processed by AI and displayed on a screen. An evaluation of network traffic revealed that DDS-C's overhead was not statistically significant compared to DDS for the majority of the configuration runs. The results demonstrate that DDS-C provides security benefits without significantly hindering overall performance.
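The publish-subscribe model that DDS provides over the network can be illustrated in-process, stripped of networking and security. This is not the DDS API: the `Broker` class, topic names, and messages are invented for illustration, and DDS-C's Kerberos ticketing sits below this abstraction in the real system.

```python
# Minimal in-process illustration of the publish-subscribe pattern.
# A real deployment would use a DDS implementation over the network,
# with DDS-C adding Kerberos-backed authentication underneath.

class Broker:
    def __init__(self):
        self.subs = {}            # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of this topic.
        for cb in self.subs.get(topic, []):
            cb(message)

broker = Broker()
frames = []
broker.subscribe("uav/video", frames.append)   # AI node subscribes
broker.publish("uav/video", "frame-001")       # UAV publishes a frame
print(frames)  # -> ['frame-001']
```

The decoupling shown here (publishers never address subscribers directly) is what lets the experiments swap in UAVs, cloud nodes, and display sinks without changing the data path.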

    Chapter Operational Validation of Search and Rescue Robots

    This chapter describes how the different ICARUS unmanned search and rescue tools have been evaluated and validated using operational benchmarking techniques. Two large-scale simulated disaster scenarios were organized: a simulated shipwreck and an earthquake response scenario. Next to these simulated response scenarios, where ICARUS tools were deployed in tight interaction with real end users, the ICARUS tools also participated in a real relief operation, embedded in a team of end users, for a flood response mission. These validation trials allow us to conclude that the ICARUS tools fulfil the user requirements and goals set up at the beginning of the project.

    Teleoperating a mobile manipulator and a free-flying camera from a single haptic device

    © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The paper presents a novel teleoperation system that allows the simultaneous and continuous command of a ground mobile manipulator and a free-flying camera, implemented using a UAV, from which the operator can monitor the task execution in real time. The proposed decoupled position and orientation workspace mapping allows the teleoperation, from a single haptic device with bounded workspace, of a complex robot with unbounded workspace. When the operator reaches the position and orientation boundaries of the haptic workspace, linear and angular velocity components are respectively added to the inputs of the mobile manipulator and the flying camera. A user study in a virtual environment has been conducted to evaluate the performance and the workload on the user before and after proper training. Analysis of the data shows that the system complexity is not an obstacle to efficient performance. This is a first step towards the implementation of a teleoperation system with a real mobile manipulator and a low-cost quadrotor as the free-flying camera. Accepted version.
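The boundary behaviour described above — direct position mapping inside the haptic workspace, with a velocity component added once a boundary is reached — can be sketched in one dimension. The workspace bound and gain below are invented values, not those of the paper.

```python
# Hedged 1-D sketch of bounded-to-unbounded workspace mapping:
# inside the haptic workspace the robot mirrors the stylus position;
# past the boundary, the overshoot becomes a velocity command, so the
# robot's effective workspace is unbounded. Bound/gain are invented.

def rate_control(stylus, bound=1.0, gain=0.5):
    """Map a 1-D stylus coordinate to (position_offset, velocity)."""
    if abs(stylus) <= bound:
        return stylus, 0.0                  # direct position mapping
    excess = stylus - bound if stylus > 0 else stylus + bound
    clipped = bound if stylus > 0 else -bound
    return clipped, gain * excess           # add velocity past the edge

print(rate_control(0.4))   # -> (0.4, 0.0): inside the workspace
print(rate_control(1.6))   # past the bound: position clips, velocity added
```

In the paper's setup the same idea is applied separately to position (driving the mobile manipulator) and orientation (driving the flying camera).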

    Using Unmanned Aerial Vehicles for Wireless Localization in Search and Rescue

    This thesis presents how unmanned aerial vehicles (UAVs) can successfully assist in search and rescue (SAR) operations using wireless localization. The zone-grid partitioning used to capture and detect WiFi probe requests follows concepts from the Search Theory Method. The UAV carries a sensor, e.g., a WiFi sniffer, to capture and detect WiFi probes from the smartphones of victims or lost people. Applying a Random-Forest-based machine learning algorithm, the user's location is estimated with 81.8% accuracy. UAV technology has shown limitations in navigational performance and flight time; procedures to mitigate these limitations are presented. Additionally, how the UAV is maneuvered during flight is analyzed, considering different SAR flight patterns and the Li-Po battery consumption rates of the UAV. Results show that controlling the UAV by remote control detected the most probes, but it is less power-efficient than controlling it autonomously.
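The zone-grid idea can be sketched with invented numbers: attribute each captured probe request to the zone where its signal was strongest, then take the modal zone as the location estimate. The thesis trains a Random-Forest model on such features; the stub below only illustrates the grid bookkeeping, and all zone names and RSSI values are made up.

```python
# Toy illustration of zone-grid localization from WiFi probe captures.
# Each probe is a mapping of zone -> RSSI (dBm) for one device; the
# device is voted into its strongest zone, and the modal zone wins.
# A Random Forest would replace this voting rule in the real system.
from collections import Counter

def estimate_zone(probes):
    """probes: list of dicts mapping zone -> RSSI for one device.
    Returns the zone receiving the most strongest-signal votes."""
    votes = Counter(max(p, key=p.get) for p in probes)
    return votes.most_common(1)[0][0]

captures = [
    {"A1": -70, "A2": -55, "B1": -80},   # probe 1: strongest in A2
    {"A1": -60, "A2": -52, "B1": -75},   # probe 2: strongest in A2
    {"A1": -50, "A2": -65, "B1": -90},   # probe 3: strongest in A1
]
print(estimate_zone(captures))  # -> A2
```

A learned classifier improves on this rule by combining RSSI features across zones rather than taking a hard per-probe maximum.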