Detection of bodies in maritime rescue operations using Unmanned Aerial Vehicles with multispectral cameras
In this study, we use unmanned aerial vehicles equipped with multispectral cameras to search for bodies in maritime rescue operations. A series of flights were performed in open-water scenarios in the northwest of Spain, using a certified aquatic rescue dummy in dangerous areas and real people when the weather conditions allowed it. The multispectral images were aligned and used to train a convolutional neural network for body detection. An exhaustive evaluation was performed to assess the best combination of spectral channels for this task. Three approaches based on a MobileNet topology were evaluated, using (a) the full image, (b) a sliding window, and (c) a precise localization method. The first method classifies an input image as containing a body or not, the second uses a sliding window to yield a class for each subimage, and the third uses transposed convolutions to return a binary output in which the body pixels are marked. In all cases, the MobileNet architecture was modified by adding custom layers and preprocessing the input to align the multispectral camera channels. Evaluation shows that the proposed methods yield reliable results, obtaining the best classification performance when combining green, red-edge, and near-infrared channels. We conclude that the precise localization approach is the most suitable method, obtaining accuracy similar to the sliding window while achieving a spatial localization close to 1 m. The presented system is about to be implemented for real maritime rescue operations carried out by Babcock Mission Critical Services Spain. This study was performed in collaboration with Babcock MCS Spain and funded by the Galicia Region Government through the Civil UAVs Initiative program, the Spanish Government's Ministry of Economy, Industry, and Competitiveness through the RTC-2014-1863-8 and INAER4-14Y (IDI-20141234) projects, and by grant number 730897 under the HPC-EUROPA3 project supported by Horizon 2020.
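The sliding-window approach described above can be sketched in a few lines. This is a minimal illustration only: the window size, stride, and the stand-in scoring function are assumptions for the example, not the paper's trained MobileNet classifier.

```python
import numpy as np

def sliding_window_classify(image, window=64, stride=32, classifier=None):
    """Score each subimage of a multi-channel aerial image.

    `image` is (H, W, C); `classifier` maps a (window, window, C) patch
    to a probability that it contains a body. Returns a coarse score map
    with one score per window position.
    """
    h, w, _ = image.shape
    rows = (h - window) // stride + 1
    cols = (w - window) // stride + 1
    scores = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = image[i * stride:i * stride + window,
                          j * stride:j * stride + window]
            scores[i, j] = classifier(patch)
    return scores

# Toy stand-in for the trained CNN: mean intensity of one spectral channel.
demo = np.zeros((128, 128, 3))
demo[64:96, 64:96, 2] = 1.0  # bright "body" region in channel 2
score_map = sliding_window_classify(demo, classifier=lambda p: p[..., 2].mean())
print(score_map.shape)  # (3, 3)
```

The coarse score map is what motivates the paper's third approach: transposed convolutions produce a per-pixel mask instead of one score per window, which is how the ~1 m localization is achieved.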
A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges
In recent years, the combination of artificial intelligence (AI) and unmanned aerial vehicles (UAVs) has brought about advancements in various areas. This comprehensive analysis explores the changing landscape of AI-powered UAVs and eco-friendly computing in their applications. It covers emerging trends, futuristic visions, and the inherent challenges that come with this relationship. The study examines how AI plays a role in enabling navigation, detecting and tracking objects, monitoring wildlife, enhancing precision agriculture, facilitating rescue operations, conducting surveillance activities, and establishing communication among UAVs using environmentally conscious computing techniques. By delving into the interaction between AI and UAVs, this analysis highlights the potential for these technologies to revolutionise industries such as agriculture, surveillance practices, disaster management strategies, and more. While envisioning possibilities, it also examines ethical considerations, safety concerns, regulatory frameworks to be established, and the responsible deployment of AI-enhanced UAV systems. By consolidating insights from research endeavours in this field, this review provides an understanding of the evolving landscape of AI-powered UAVs while setting the stage for further exploration in this transformative domain.
A Review of Deep Learning Methods and Applications for Unmanned Aerial Vehicles
Deep learning has recently shown outstanding results for solving a wide variety of robotic tasks in the areas of perception, planning, localization, and control. Its excellent capabilities for learning representations from the complex data acquired in real environments make it extremely suitable for many kinds of autonomous robotic applications. In parallel, Unmanned Aerial Vehicles (UAVs) are currently being extensively applied to several types of civilian tasks, in applications ranging from security, surveillance, and disaster rescue to parcel delivery and warehouse management. In this paper, a thorough review has been performed of recently reported uses and applications of deep learning for UAVs, including the most relevant developments as well as their performances and limitations. In addition, a detailed explanation of the main deep learning techniques is provided. We conclude with a description of the main challenges for the application of deep learning to UAV-based solutions.
Disaster area recognition from aerial images with complex-shape class detection.
This paper presents a convolutional neural network (CNN) model for event detection from Unmanned Aerial Vehicles (UAV) in disaster environments. The model leverages the YOLOv5 network, specifically adapted for aerial images and optimized for detecting Search and Rescue (SAR) related classes for disaster area recognition. These SAR-related classes are person, vehicle, debris, fire, smoke, and flooded areas. Among these, the latter four classes lead to unique challenges due to their lack of discernible edges and/or shapes in aerial imagery, making their accurate detection and performance evaluation metrics particularly intricate. The methodology for the model training involves the adaptation of the pre-trained model for aerial images and its subsequent optimization for SAR scenarios. These stages have been conducted using public datasets, with the required image labeling in the case of SAR-related classes. An analysis of the obtained results demonstrates the model's performance while discussing the intricacies related to complex-shape classes. The model and the SAR datasets are publicly available. This work has been partially funded by the Spanish Ministerio de Ciencia, Innovación y Universidades, Gobierno de España, project PID2021-122944OB-I00.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
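The evaluation difficulty for the complex-shape classes mentioned above can be illustrated with the standard intersection-over-union (IoU) metric used in detection benchmarks. The boxes below are hypothetical examples, not the paper's data; they show how a diffuse region such as smoke can score a low IoU even when the predicted and annotated boxes overlap substantially.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A loosely annotated "smoke" region vs. a shifted prediction of equal size:
# the boxes share more than half their area yet the IoU is below 0.5, so a
# typical IoU@0.5 threshold would count this as a miss.
print(iou((0, 0, 100, 100), (25, 25, 125, 125)))  # ~0.39
```

This is one reason edge-free classes make threshold-based detection metrics hard to interpret, as the abstract notes.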
An Appearance-Based Tracking Algorithm for Aerial Search and Rescue Purposes
The automation of the Wilderness Search and Rescue (WiSAR) task aims for a high-level understanding of varied scenery. In addition, working in unfriendly and complex environments may delay the operation and consequently put human lives at risk. To address this problem, Unmanned Aerial Vehicles (UAVs) are used to support conventional methods. These vehicles are equipped with reliable human detection and tracking algorithms, in order to find and track the bodies of victims in complex environments, and with a robust control system to maintain safe distances from the detected bodies. In this paper, a human detection method based on the color and depth data captured from onboard sensors is proposed. Moreover, computing data association from the skeleton pose and a visual appearance measurement allows the tracking of multiple people with invariance to the scale, translation, and rotation of the point of view with respect to the target objects. The system has been validated with real and simulated experiments, and the obtained results show the ability to track multiple individuals even after long-term disappearances. Furthermore, the simulations demonstrate the robustness of the implemented reactive control system as a promising tool for assisting the pilot in performing approach maneuvers in a safe and smooth manner. This research is supported by Madrid Community project SEGVAUTO 4.0 (P2018/EMT-4362) and by the Spanish Government CICYT projects (TRA2015-63708-R and TRA2016-78886-C3-1-R), and the Ministerio de Educación, Cultura y Deporte para la Formación de Profesorado Universitario (FPU14/02143). We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of the GPUs used for this research.
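The data-association step at the heart of such a tracker can be sketched minimally as matching appearance descriptors between existing tracks and new detections. This is a simplified illustration under stated assumptions: the descriptors, the Euclidean distance, the threshold, and the greedy matching strategy are all placeholders, not the paper's exact skeleton-pose-plus-appearance algorithm.

```python
import math

def associate(track_feats, det_feats, max_dist=0.5):
    """Greedily match each track's appearance descriptor to the nearest
    unused detection descriptor within `max_dist`; returns (track, det) pairs.
    Unmatched tracks are simply left out (e.g. a person who disappeared)."""
    pairs, used = [], set()
    for t, tf in enumerate(track_feats):
        best, best_d = None, max_dist
        for d, df in enumerate(det_feats):
            if d in used:
                continue
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(tf, df)))
            if dist < best_d:
                best, best_d = d, dist
        if best is not None:
            used.add(best)
            pairs.append((t, best))
    return pairs

tracks = [(1.0, 0.0), (0.0, 1.0)]
detections = [(0.1, 0.9), (0.9, 0.1)]  # detections arrive in swapped order
print(associate(tracks, detections))   # [(0, 1), (1, 0)]
```

A production tracker would typically replace the greedy loop with an optimal assignment (e.g. the Hungarian algorithm) and fuse the appearance distance with pose cues, which is closer to what the abstract describes.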