Unmanned Aerial Systems for Wildland and Forest Fires
Wildfires represent an important natural risk, causing economic losses, loss of
human life and severe environmental damage. In recent years, we have witnessed
an increase in fire intensity and frequency. Research has been conducted towards
the development of dedicated solutions for wildland and forest fire assistance
and fighting. Systems were proposed for the remote detection and tracking of
fires. These systems have shown improvements in the area of efficient data
collection and fire characterization within small scale environments. However,
wildfires cover large areas making some of the proposed ground-based systems
unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial
Systems (UAS) were proposed. UAS have proven to be useful due to their
maneuverability, allowing for the implementation of remote sensing, allocation
strategies and task planning. They can provide a low-cost alternative for the
prevention, detection and real-time support of firefighting. In this paper we
review previous work related to the use of UAS in wildfires. Onboard sensor
instruments, fire perception algorithms and coordination strategies are
considered. In addition, we present some of the recent frameworks proposing the
use of both aerial vehicles and Unmanned Ground Vehicles (UGVs) for a more
efficient wildland firefighting strategy at a larger scale.
Comment: A recently published version of this paper is available at:
https://doi.org/10.3390/drones501001
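The allocation strategies and task planning surveyed in the review typically reduce to assigning vehicles to observed fire fronts. As a minimal illustrative sketch, not any specific method from the reviewed papers, a greedy nearest-hotspot allocation might look like this (all coordinates hypothetical):

```python
import math

def greedy_allocation(uavs, hotspots):
    """Assign each UAV to its nearest still-unassigned fire hotspot (greedy).

    uavs, hotspots: lists of (x, y) positions. Returns {uav_index: hotspot_index}.
    """
    assignments = {}
    remaining = list(range(len(hotspots)))
    for i, pos in enumerate(uavs):
        if not remaining:
            break
        # pick the closest remaining hotspot by Euclidean distance
        best = min(remaining, key=lambda j: math.dist(pos, hotspots[j]))
        assignments[i] = best
        remaining.remove(best)
    return assignments

uavs = [(0.0, 0.0), (10.0, 0.0)]
hotspots = [(9.0, 1.0), (1.0, 1.0)]
print(greedy_allocation(uavs, hotspots))  # {0: 1, 1: 0}
```

A greedy rule like this ignores global optimality (the Hungarian algorithm would minimize total travel distance), but it is the simplest baseline such coordination papers compare against.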
Vision-model-based Real-time Localization of Unmanned Aerial Vehicle for Autonomous Structure Inspection under GPS-denied Environment
UAVs have been widely used in visual inspections of buildings, bridges and
other structures. In both autonomous and semi-autonomous outdoor flight
missions, a strong GPS signal is vital for a UAV to determine its own position.
However, a strong GPS signal is not always available: it can degrade or be
fully lost underneath large structures or close to power lines, which can cause
serious control issues or even UAV crashes. Such limitations severely restrict
the application of UAVs as a routine inspection tool in various domains. In
this paper, a vision-model-based real-time self-positioning method is proposed
to support autonomous aerial inspection without the need for GPS support.
Compared to other localization methods that require additional onboard
sensors, the proposed method uses a single camera to continuously estimate the
in-flight poses of the UAV. Each step of the proposed method is discussed in
detail, and its performance is tested through an indoor test case.
Comment: 8 pages, 5 figures, submitted to i3ce 201
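Estimating pose from a single camera against a known structure model, as the abstract describes, amounts to minimizing reprojection error between the observed image points and the projected model points. The following one-dimensional sketch (hypothetical pinhole camera, focal length, and model points; not the paper's implementation) recovers a camera position by grid search over that error:

```python
import numpy as np

def project(points_3d, cam_x):
    """Pinhole projection of world points for a camera at (cam_x, 0, -5),
    looking along +z with focal length f = 500 px (illustrative values)."""
    f = 500.0
    rel = points_3d - np.array([cam_x, 0.0, -5.0])
    return f * rel[:, :2] / rel[:, 2:3]

# known structure model points (e.g., corners of a facade element)
model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [1.0, 1.0, 0.5]])

true_x = 0.7
observed = project(model, true_x)   # what the onboard camera "sees"

# recover the camera x-position by minimizing reprojection error
candidates = np.linspace(-2.0, 2.0, 4001)
errors = [np.sum((project(model, x) - observed) ** 2) for x in candidates]
estimate = candidates[int(np.argmin(errors))]
print(round(estimate, 3))  # 0.7
```

A real system would solve for the full 6-DoF pose with an iterative PnP solver rather than a 1-D grid search, but the objective, reprojection error against a known model, is the same.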
Predicting growing stock volume of Eucalyptus plantations using 3-D point clouds derived from UAV imagery and ALS data
Estimating forest inventory variables is important in monitoring forest resources and
mitigating climate change. In this respect, forest managers require flexible, non-destructive methods
for estimating volume and biomass. High-resolution and low-cost remote sensing data are increasingly
available to measure three-dimensional (3D) canopy structure and to model forest structural attributes.
The main objective of this study was to evaluate and compare the individual tree volume estimates
derived from high-density point clouds obtained from airborne laser scanning (ALS) and digital
aerial photogrammetry (DAP) in Eucalyptus spp. plantations. Object-based image analysis (OBIA)
techniques were applied for individual tree crown (ITC) delineation. The ITC algorithm applied
correctly detected and delineated 199 trees from ALS-derived data, while 192 trees were correctly
identified using DAP-based point clouds acquired from Unmanned Aerial Vehicles
(UAVs), representing accuracy levels of 62% and 60%, respectively. Addressing
volume modelling, a non-linear regression fit based on individual tree height
and individual crown area derived from the ITC provided the following results:
Model Efficiency (Mef) = 0.43 and 0.46, Root Mean Square Error (RMSE) = 0.030 m3
and 0.026 m3, rRMSE = 20.31% and 19.97%, and approximately unbiased results
(0.025 m3 and 0.0004 m3) using DAP- and ALS-based estimations, respectively. No
significant difference was found between the observed values (field data) and
the volume estimates from ALS and DAP (p-value from t-test = 0.99 and 0.98,
respectively). The proposed approaches could also be used to estimate basal
area or biomass stocks in Eucalyptus spp. plantations.
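The evaluation metrics reported above (Mef, RMSE, rRMSE) are standard for tree-level volume models. The sketch below, on entirely hypothetical synthetic tree data rather than the study's measurements, fits a power-law volume model from height and crown area by log-linear least squares and computes those metrics:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic tree-level data (hypothetical, for illustration only)
h = rng.uniform(10, 25, 200)           # tree height (m)
ca = rng.uniform(2, 12, 200)           # crown area (m2)
v_obs = 0.002 * h**1.8 * ca**0.5 * rng.lognormal(0, 0.1, 200)  # volume (m3)

# fit a power model v = a * h^b * ca^c via log-linear least squares
X = np.column_stack([np.ones_like(h), np.log(h), np.log(ca)])
coef, *_ = np.linalg.lstsq(X, np.log(v_obs), rcond=None)
v_pred = np.exp(X @ coef)

# metrics as defined for volume models: RMSE, relative RMSE, model efficiency
rmse = np.sqrt(np.mean((v_obs - v_pred) ** 2))
rrmse = 100 * rmse / np.mean(v_obs)
mef = 1 - np.sum((v_obs - v_pred) ** 2) / np.sum((v_obs - np.mean(v_obs)) ** 2)
print(f"RMSE={rmse:.3f} m3  rRMSE={rrmse:.1f}%  Mef={mef:.2f}")
```

Mef is computed exactly like the coefficient of determination against the mean predictor; a value near 1 means the height/crown-area model explains most of the between-tree volume variance.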
Mixed marker-based/marker-less visual odometry system for mobile robots
When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often leads to severe drifts in the computed estimates, making autonomous operations difficult to accomplish. This paper proposes a solution that alleviates the impact of these issues by combining two vision-based pose estimation techniques working on relative and absolute coordinate systems, respectively. In particular, the unknown ground features in the images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm, which estimates the relative frame-to-frame movements. The errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. The markers are framed from time to time, which keeps the drift bounded while additionally providing the robot with the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated using an off-the-shelf quadrotor via extensive experimental tests.
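The core idea, relative odometry that drifts, corrected by absolute references whenever a marker is framed, can be illustrated by a one-dimensional toy simulation (hypothetical noise levels and marker layout; not the paper's system):

```python
import random

random.seed(1)

# artificial markers at known absolute positions along a corridor (metres)
markers = [10.0, 20.0, 30.0]

true_pos, est_pos = 0.0, 0.0
max_err = 0.0
for step in range(300):
    move = 0.1                               # commanded frame-to-frame motion
    true_pos += move
    est_pos += move + random.gauss(0, 0.01)  # odometry adds small noisy drift
    # when a marker is framed, snap the estimate to the absolute reference,
    # which bounds the accumulated drift
    for m in markers:
        if abs(true_pos - m) < 0.05:
            est_pos = m
    max_err = max(max_err, abs(est_pos - true_pos))
print(f"max estimation error with marker corrections: {max_err:.3f} m")
```

Without the marker resets, the estimation error is a random walk that grows without bound; with them, the worst-case error is governed by the spacing between markers, which is exactly the trade-off the abstract describes.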
Transfer Learning-Based Crack Detection by Autonomous UAVs
Unmanned Aerial Vehicles (UAVs) have recently shown great performance in
collecting visual data through autonomous exploration and mapping in building
inspection. Yet, few studies consider the post-processing of the data and its
integration with autonomous UAVs, which would enable major steps toward the
full automation of building inspection. In this regard, this work presents a
decision-making tool for revisiting tasks in visual building inspection by
autonomous UAVs. The tool is an implementation of fine-tuning a pretrained
Convolutional Neural Network (CNN) for surface crack detection. It offers an
optional mechanism for task planning of revisiting pinpoint locations during
inspection. It is integrated into a quadrotor UAV system that can autonomously
navigate in GPS-denied environments. The UAV is equipped with onboard sensors
and computers for autonomous localization, mapping and motion planning. The
integrated system is tested through simulations and real-world experiments.
The results show that the system achieves crack detection and autonomous
navigation in GPS-denied environments for building inspection.