Drones in Vegetable Crops: A Systematic Literature Review
In the context of increasing global population and climate change, modern agriculture must enhance production
efficiency. Vegetable production is crucial for human nutrition and has a significant environmental impact. To
address this challenge, the agricultural sector needs to modernize and adopt advanced technologies such as
drones to increase productivity, improve quality, and reduce resource consumption. These devices, known as
Unmanned Aerial Vehicles (UAVs), play a crucial role in monitoring and spraying operations thanks to their
agility and versatility, and significantly contribute to the efficacy of precision farming.
The aim of this review is to examine the critical role of drones as innovative tools for enhancing the management
and yield of vegetable crops. The review was carried out using the Preferred Reporting Items for Systematic
Reviews and Meta-Analyses (PRISMA) framework and involved the analysis of a wide range of research
published from 2018 to 2023. Following the Identification, Screening, and Eligibility phases, 132 papers
were selected and analysed. These papers were categorized by the type of drone application in vegetable
crop production, providing an overview of how these tools fit into the field of Precision Farming. Technological
developments of these tools and data-processing methods were then explored, examining the contributions of
Machine and Deep Learning and Artificial Intelligence. Final considerations were presented regarding practical
implementation and the future technical and scientific challenges to fully harness the potential of drones in precision
agriculture and vegetable crop production. The review pointed out the significance of drone applications in
vegetable crops and the immense potential of these tools for enhancing cultivation efficiency. Drone utilization
enables not only the reduction of inputs such as herbicides, fertilizers, pesticides, and water, but also the prevention
of damage through early diagnosis of various stress types. These input savings can yield environmental
benefits, positioning these technologies as potential solutions for the environmental sustainability of vegetable
crops.
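The early stress diagnosis mentioned in this abstract is commonly based on multispectral vegetation indices, although the review does not tie it to any single method. A minimal sketch of that idea, computing NDVI from near-infrared and red reflectance and flagging low-index pixels (the band values and the 0.4 threshold below are purely illustrative, not taken from the review):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    valid = denom > 0           # avoid division by zero on empty pixels
    out[valid] = (nir[valid] - red[valid]) / denom[valid]
    return out

def stress_mask(ndvi_map: np.ndarray, threshold: float = 0.4) -> np.ndarray:
    """Flag pixels whose NDVI falls below a (crop-specific) threshold."""
    return ndvi_map < threshold

# Toy 2x2 scene: top row healthy canopy, bottom row stressed/bare soil
nir = np.array([[0.8, 0.7], [0.3, 0.2]])
red = np.array([[0.1, 0.1], [0.2, 0.2]])
m = ndvi(nir, red)
print(int(stress_mask(m).sum()))  # → 2 stressed pixels
```

In practice the threshold would be calibrated per crop and growth stage, and the bands would come from a radiometrically corrected drone ortho-mosaic.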
Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry
Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with a
more accurate scene understanding. Among other benefits, it will enable unsupervised automatic sampling of
meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. Recent,
fast-developing research in computational fields is already taking this path; however, some issues related to
computationally expensive processes in the integration of multi-source sensing data remain.
Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned
Aerial Vehicles (UAV) and sensors able to capture massive datasets with a high spatial resolution. In this scope,
many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and
multi-source data fusion. This survey aims to present a summary of previous work according to the most relevant
contributions for the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and
hyperspectral imagery. Surveyed applications are focused on agriculture and forestry since these fields
concentrate most applications and are widely studied. Many challenges are currently being overcome by recent
methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image
datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches that
are also summarized in this work. Finally, as a conclusion, some open issues and future research directions are
presented.
Funding: European Commission; Junta de Andalucía; Instituto de Estudios Giennenses; Spanish Government; Portuguese Foundation for Science and Technology (grants 1381202-GEU, PYC20-RE-005-UJA, IEG-2021, UIDB/04033/2020, DATI-Digital Agriculture Technologies, FPU19/0010).
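A core building block of the multi-source fusion surveyed above is attaching radiometric values (thermal, multispectral, or hyperspectral samples) to the points of a reconstructed 3D model. A minimal sketch under simplifying assumptions, none of which come from the survey itself: nadir acquisition geometry, nearest-neighbour association in the XY plane, and brute-force search (real pipelines use spatial indices or the GPGPU acceleration the survey discusses):

```python
import numpy as np

def fuse_attributes(points: np.ndarray, sensor_xy: np.ndarray,
                    values: np.ndarray) -> np.ndarray:
    """Attach to each 3D point the value of its nearest sensor sample.

    points:    (N, 3) reconstructed 3D coordinates
    sensor_xy: (M, 2) georeferenced sample positions
    values:    (M,)   e.g. thermal readings or band reflectance
    """
    # Squared XY distance from every point to every sample: (N, M)
    d2 = ((points[:, None, :2] - sensor_xy[None, :, :]) ** 2).sum(axis=2)
    return values[d2.argmin(axis=1)]

# Hypothetical data: two 3D points and two thermal samples (kelvin)
pts = np.array([[0.0, 0.0, 1.0], [10.0, 10.0, 2.0]])
samples = np.array([[0.1, 0.1], [9.9, 10.1]])
temps = np.array([290.0, 305.0])
fused = fuse_attributes(pts, samples, temps)
print(fused)  # [290. 305.]
```

The O(N·M) distance matrix is only workable for small clouds; the GPGPU approaches summarized in the survey address exactly this scaling problem for massive UAV datasets.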
A Review on Deep Learning in UAV Remote Sensing
Deep Neural Networks (DNNs) learn representation from data with an impressive
capability, and brought important breakthroughs for processing images,
time-series, natural language, audio, video, and many others. In the remote
sensing field, surveys specifically covering applications of DNN algorithms have been conducted to
summarize the information produced in its subfields. Recently, Unmanned Aerial Vehicle
(UAV)-based applications have dominated aerial sensing research. However, a literature
review that combines the "deep learning" and "UAV remote sensing" themes has not yet been
conducted. Our motivation was to present a comprehensive review of the fundamentals of Deep
Learning (DL) applied to UAV-based imagery. We focus mainly on describing classification and
regression techniques used in recent applications with UAV-acquired data. For that, a total of
232 papers published in international scientific journal databases were examined. We gathered
the published material and evaluated their characteristics regarding application, sensor, and
technique used. We relate how DL presents promising results and has potential for processing
tasks associated with UAV-based image data. Lastly, we project future perspectives, commenting
on prominent DL paths to be explored in the UAV remote sensing field. Our review offers an
accessible introduction to and summary of the state of the art in UAV-based image applications
with DNN algorithms across diverse subfields of remote sensing, grouped into environmental,
urban, and agricultural contexts.
Comparison of machine learning algorithms for wildland-urban interface fuelbreak planning integrating ALS and UAV-Borne LiDAR data and multispectral images
Controlling vegetation fuels around human settlements is a crucial strategy for reducing fire severity in forests, buildings, and infrastructure, as well as for protecting human lives. Each country has its own regulations in this respect, but they all have in common that reducing fuel load reduces the intensity and severity of fire. Combining Unmanned Aerial Vehicle (UAV)-acquired data with other passive and active remote sensing data gives the best performance for planning Wildland-Urban Interface (WUI) fuelbreaks through machine learning algorithms. Nine remote sensing data sources (active and passive) and four supervised classification algorithms (Random Forest, Linear and Radial Support Vector Machine, and Artificial Neural Networks) were tested to classify five fuel-area types. We used very high-density Light Detection and Ranging (LiDAR) data acquired by UAV (154 returns·m⁻² and an ortho-mosaic of 5-cm pixels), multispectral data from the Pleiades-1B and Sentinel-2 satellites, and low-density LiDAR data acquired by Airborne Laser Scanning (ALS) (0.5 returns·m⁻², ortho-mosaic of 25-cm pixels). Through the Variable Selection Using Random Forest (VSURF) procedure, a pre-selection of final variables was carried out to train the model. The four algorithms were compared, and it was concluded that the differences among them in overall accuracy (OA) on the training datasets were negligible. Although the highest training accuracy was obtained with SVML (OA = 94.46%) and the highest testing accuracy with ANN (OA = 91.91%), Random Forest was considered the most reliable algorithm, since it produced more consistent predictions owing to the smaller difference between its training and testing performance. Using a combination of Sentinel-2 and the two LiDAR datasets (UAV and ALS), Random Forest obtained an OA of 90.66% on the training and 91.80% on the testing datasets.
The differences in accuracy between the data sources used are much greater than those between algorithms. LiDAR growth metrics calculated from point clouds acquired on different dates, and multispectral information from different seasons of the year, are the most important variables in the classification. Our results support the essential role of UAVs in fuelbreak planning and management and thus in the prevention of forest fires.
Funding: Ministerio de Economía, Industria y Competitividad (DI-16-08446; DI-17-09626; PTQ-16-08411; PTQ-16-08633); European Commission through the project 'MySustainableForest' (H2020-EO-2017; 776045).
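The reliability criterion used above, choosing the classifier with the smallest gap between training and testing overall accuracy, can be sketched directly. The Random Forest figures are taken from the abstract; the missing SVML testing and ANN training values are assumed for illustration only:

```python
# Overall accuracies (%). RF values come from the abstract; the
# SVML test OA and ANN train OA are hypothetical placeholders.
results = {
    "RandomForest": {"train": 90.66, "test": 91.80},
    "SVM-linear":   {"train": 94.46, "test": 90.10},  # test OA assumed
    "ANN":          {"train": 95.20, "test": 91.91},  # train OA assumed
}

def most_consistent(res: dict) -> str:
    """Pick the algorithm with the smallest |train - test| OA gap,
    i.e. the one least prone to overfitting the training data."""
    return min(res, key=lambda k: abs(res[k]["train"] - res[k]["test"]))

print(most_consistent(results))  # RandomForest (gap of 1.14 points)
```

Under these numbers the selection matches the abstract's conclusion: a slightly lower peak accuracy is traded for predictions that generalize more consistently.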
Review on Active and Passive Remote Sensing Techniques for Road Extraction
Digital maps of road networks are a vital part of digital cities and intelligent transportation. In this paper, we provide a comprehensive review on road extraction based on various remote sensing data sources, including high-resolution images, hyperspectral images, synthetic aperture radar images, and light detection and ranging. This review is divided into three parts. Part 1 provides an overview of the existing data acquisition techniques for road extraction, including data acquisition methods, typical sensors, application status, and prospects. Part 2 underlines the main road extraction methods based on four data sources. In this section, road extraction methods based on different data sources are described and analysed in detail. Part 3 presents the combined application of multisource data for road extraction. Evidently, different data acquisition techniques have unique advantages, and the combination of multiple sources can improve the accuracy of road extraction. The main aim of this review is to provide a comprehensive reference for research on existing road extraction technologies.
On the Use of Unmanned Aerial Systems for Environmental Monitoring
Environmental monitoring plays a central role in diagnosing climate and management impacts on natural and agricultural systems; enhancing the understanding of hydrological processes; optimizing the allocation and distribution of water resources; and assessing, forecasting, and even preventing natural disasters. Nowadays, most monitoring and data collection systems are based upon a combination of ground-based measurements, manned airborne sensors, and satellite observations. These data are utilized in describing both small- and large-scale processes, but have spatiotemporal constraints inherent to each respective collection system. Bridging the unique spatial and temporal divides that limit current monitoring platforms is key to improving our understanding of environmental systems. In this context, Unmanned Aerial Systems (UAS) have considerable potential to radically improve environmental monitoring. UAS-mounted sensors offer an extraordinary opportunity to bridge the existing gap between field observations and traditional air- and space-borne remote sensing, by providing high spatial detail over relatively large areas in a cost-effective way and an entirely new capacity for enhanced temporal retrieval. As well as showcasing recent advances in the field, there is also a need to identify and understand the potential limitations of UAS technology. For these platforms to reach their monitoring potential, a wide spectrum of unresolved issues and application-specific challenges require focused community attention. Indeed, to leverage the full potential of UAS-based approaches, sensing technologies, measurement protocols, postprocessing techniques, retrieval algorithms, and evaluation techniques need to be harmonized. The aim of this paper is to provide an overview of the existing research and applications of UAS in natural and agricultural ecosystem monitoring in order to identify future directions, applications, developments, and challenges.
Machine Learning-Aided Operations and Communications of Unmanned Aerial Vehicles: A Contemporary Survey
The ongoing amalgamation of UAV and ML techniques is creating a significant
synergy and empowering UAVs with unprecedented intelligence and autonomy. This
survey aims to provide a timely and comprehensive overview of ML techniques
used in UAV operations and communications and identify the potential growth
areas and research gaps. We emphasise the four key components of UAV operations
and communications to which ML can significantly contribute, namely, perception
and feature extraction, feature interpretation and regeneration, trajectory and
mission planning, and aerodynamic control and operation. We classify the latest
popular ML tools based on their applications to the four components and conduct
gap analyses. This survey also takes a step forward by pointing out significant
challenges in the upcoming realm of ML-aided automated UAV operations and
communications. It is revealed that different ML techniques dominate the
applications to the four key modules of UAV operations and communications.
While there is an increasing trend of cross-module designs, little effort has
been devoted to an end-to-end ML framework, from perception and feature
extraction to aerodynamic control and operation. It is also unveiled that the
reliability and trust of ML in UAV operations and applications require
significant attention before full automation of UAVs and potential cooperation
between UAVs and humans come to fruition.
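As an illustration of the trajectory and mission planning component named above, here is a toy grid-based A* waypoint planner. This is a textbook method used only to make the component concrete; the grid, start, and goal are hypothetical, and real UAV planners operate in continuous 3D space under dynamic and energy constraints:

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 4-connected grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, cell, path)
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0 \
                    and (r, c) not in seen:
                heapq.heappush(open_set,
                               (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None

# No-fly cells in the middle row force a detour around the obstacle
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
route = astar(grid, (0, 0), (2, 0))
print(route)  # 7-cell detour around the blocked row
```

Surveyed ML planners typically replace or augment the heuristic and cost terms here with learned models, which is where the cross-module designs the survey discusses come in.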