AgriColMap: Aerial-Ground Collaborative 3D Mapping for Precision Farming
The combination of aerial survey capabilities of Unmanned Aerial Vehicles
with targeted intervention abilities of agricultural Unmanned Ground Vehicles
can significantly improve the effectiveness of robotic systems applied to
precision agriculture. In this context, building and updating a common map of
the field is an essential but challenging task. The maps built using robots of
different types show differences in size, resolution and scale, the associated
geolocation data may be inaccurate and biased, while the repetitiveness of both
visual appearance and geometric structures found within agricultural contexts
render classical map merging techniques ineffective. In this paper we propose
AgriColMap, a novel map registration pipeline that leverages a grid-based
multimodal environment representation which includes a vegetation index map and
a Digital Surface Model. We cast the data association problem between maps
built from UAVs and UGVs as a multimodal, large displacement dense optical flow
estimation. The dominant, coherent flows, selected using a voting scheme, are
used as point-to-point correspondences to infer a preliminary non-rigid
alignment between the maps. A final refinement is then performed by exploiting only meaningful parts of the registered maps. We evaluate our system using real-world data from three fields with different crop species. The results show that our method outperforms several state-of-the-art map registration and matching techniques by a large margin, and has a higher tolerance to large initial
misalignments. We release an implementation of the proposed approach along with
the acquired datasets with this paper.
Comment: Published in IEEE Robotics and Automation Letters, 201
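The voting step described above can be sketched in isolation. The snippet below assumes a dense flow field between the two grid maps has already been estimated (e.g. by an off-the-shelf optical flow method); the histogram-based selection and the single mean shift are an illustrative simplification of the idea, not the authors' actual pipeline, and `dominant_flow` and its bin count are hypothetical choices.

```python
import numpy as np

def dominant_flow(flow: np.ndarray, bins: int = 32):
    """Select the dominant, coherent displacement from a dense flow field.

    flow: H x W x 2 array of per-cell (dx, dy) displacements between maps.
    Returns the winning mean displacement and a boolean inlier mask.
    """
    dx = flow[..., 0].ravel()
    dy = flow[..., 1].ravel()
    # Vote: quantize displacements into a 2D histogram, pick the densest bin
    hist, xedges, yedges = np.histogram2d(dx, dy, bins=bins)
    bx, by = np.unravel_index(np.argmax(hist), hist.shape)
    inliers = ((dx >= xedges[bx]) & (dx <= xedges[bx + 1]) &
               (dy >= yedges[by]) & (dy <= yedges[by + 1]))
    # The surviving flows act as point-to-point correspondences; here we
    # summarize them as a single mean shift for illustration.
    shift = (float(dx[inliers].mean()), float(dy[inliers].mean()))
    return shift, inliers.reshape(flow.shape[:2])
```

In a full pipeline, the inlier correspondences would feed a non-rigid alignment rather than a single translation; the voting simply rejects the incoherent flows caused by repetitive field structure.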
Building an Aerial-Ground Robotics System for Precision Farming: An Adaptable Solution
The application of autonomous robots in agriculture is gaining increasing
popularity thanks to the high impact it may have on food security,
sustainability, resource use efficiency, reduction of chemical treatments, and
the optimization of human effort and yield. With this vision, the Flourish
research project aimed to develop an adaptable robotic solution for precision
farming that combines the aerial survey capabilities of small autonomous
unmanned aerial vehicles (UAVs) with targeted intervention performed by
multi-purpose unmanned ground vehicles (UGVs). This paper presents an overview
of the scientific and technological advances and outcomes obtained in the
project. We introduce multi-spectral perception algorithms and aerial and ground-based systems developed for monitoring crop density, weed pressure, and crop nitrogen nutrition status, and for accurately classifying and locating weeds. We then
introduce the navigation and mapping systems tailored to our robots in the
agricultural environment, as well as the modules for collaborative mapping. We
finally present the ground intervention hardware, software solutions, and
interfaces we implemented and tested in different field conditions and with
different crops. We describe a real use case in which a UAV collaborates with a
UGV to monitor the field and to perform selective spraying without human
intervention.
Comment: Published in IEEE Robotics & Automation Magazine, vol. 28, no. 3, pp. 29-49, Sept. 202
Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry
Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with a more accurate scene understanding. This will enable, among other things, unsupervised automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. This path is already being taken by recent, fast-developing research in computational fields; however, some issues related to computationally expensive processes in the integration of multi-source sensing data remain.
Recent studies focused on Earth observation and characterization are enhanced by the proliferation of Unmanned
Aerial Vehicles (UAV) and sensors able to capture massive datasets with a high spatial resolution. In this scope,
many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and
multi-source data fusion. This survey aims to present a summary of previous work according to the most relevant
contributions for the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and
hyperspectral imagery. Surveyed applications are focused on agriculture and forestry since these fields
concentrate most applications and are widely studied. Many challenges are currently being overcome by recent
methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image
datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches that
are also summarized in this work. Finally, as a conclusion, some open issues and future research directions are
presented.
Funding: European Commission; Junta de Andalucia; Instituto de Estudios Gienneses; Spanish Government; Portuguese Foundation for Science and Technology (grants 1381202-GEU, PYC20-RE-005-UJA, IEG-2021, UIDB/04033/2020, FPU19/0010; DATI-Digital Agriculture Technologies).
Integration of a Minimalistic Set of Sensors for Mapping and Localization of Agricultural Robots
Robots have recently become ubiquitous in many aspects of daily life. For in-house applications there are vacuuming, mopping, and lawn-mowing robots. Swarms of robots have been used in Amazon warehouses for several years. Autonomous cars, despite being set back by several safety issues, are undeniably becoming the standard of the automobile industry. Beyond commercial applications, robots can perform various tasks, such as inspecting hazardous sites and taking part in search-and-rescue missions. Regardless of the end-user application, autonomy plays a crucial role in modern robots. The essential capabilities required for autonomous operation are mapping, localization and navigation. The goal of this thesis is to develop a new approach to solve the problems of mapping, localization, and navigation for autonomous robots in agriculture. This type of environment poses unique challenges, such as repetitive patterns and large-scale environments with sparse features, in comparison to scenarios such as cities, where good features such as pavements, buildings, road lanes, and traffic signs abound.
In outdoor agricultural environments, a robot can rely on a Global Navigation Satellite System (GNSS) to determine its whereabouts. This, however, limits the robot's activities to areas with accessible GNSS signals and fails in indoor environments. In such cases, different types of exteroceptive sensors, such as (RGB, depth, thermal) cameras, laser scanners and Light Detection and Ranging (LiDAR), and proprioceptive sensors, such as Inertial Measurement Units (IMUs) and wheel encoders, can be fused to better estimate the robot's state. Generic approaches that combine several different sensors often yield superior estimation results, but they are not always optimal in terms of cost-effectiveness, modularity, reusability, and interchangeability. For agricultural robots, robustness for long-term operation is as important as cost-effectiveness for mass production.
We tackle this challenge by exploring and selectively using a handful of sensors, such as RGB-D cameras, LiDAR, and IMUs, for representative agricultural environments. The sensor fusion algorithms provide high precision and robustness for mapping and localization while assuring cost-effectiveness by employing only the sensors necessary for the task at hand. In this thesis, we extend LiDAR mapping and localization methods for urban scenarios to cope with agricultural environments, where the presence of slopes, vegetation, and trees causes traditional approaches to fail. Our mapping method substantially reduces the memory footprint for map storage, which is important for large-scale farms. We show how to handle the localization problem in dynamically growing strawberry polytunnels by using only a stereo visual-inertial (VI) and depth sensor to extract and track only invariant features. This eliminates the need for remapping to deal with dynamic scenes. As a demonstration of the minimalistic requirements for autonomous agricultural robots, we show the ability to autonomously traverse between rows in a difficult environment of zigzag-like polytunnels using only a laser scanner. Furthermore, we present an autonomous navigation capability that uses only a camera, without explicitly performing mapping or localization. Finally, our mapping and localization methods are generic and platform-agnostic, and can be applied to different types of agricultural robots.
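The memory-footprint point can be illustrated with the most common map-compression primitive, voxel-grid downsampling, which keeps one representative point per occupied cell. This is a generic sketch under the assumption of a raw point-cloud map, not the thesis's actual mapping method; `voxel_downsample` is a hypothetical helper.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Replace all points falling in the same voxel by their centroid.

    points: N x 3 array of map points; voxel: cell edge length in metres.
    """
    # Integer voxel index for every point
    keys = np.floor(points / voxel).astype(np.int64)
    # Group points that share a voxel
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    centroids = np.zeros((counts.size, points.shape[1]))
    for d in range(points.shape[1]):
        centroids[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    return centroids
```

The stored map then grows with the number of occupied voxels rather than the number of raw scan points, which is what makes large-scale farm maps tractable.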
All contributions presented in this thesis have been tested and validated on real robots in real agricultural environments. All approaches have been published or submitted as peer-reviewed conference papers and journal articles.
TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture
Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and, especially, of its surroundings, such as humans and animals. To get fully autonomous vehicles certified for farming, computer vision algorithms and sensor technologies must detect obstacles with performance equivalent to or better than human level. Furthermore, detections must run in real time to allow vehicles to actuate and avoid collisions. This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve the detection of obstacles and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. The multi-sensor system consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera, and multi-beam lidar) mounted on/in a ruggedized and water-resistant casing. Algorithms have been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera, and one for the multi-beam lidar) and to fuse the detection information in a common format using either 3D positions or Inverse Sensor Models. A GPU-powered computational platform runs the detection algorithms online. For the RGB camera, a deep learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded, and unknown obstacles in agriculture. Compared with the state-of-the-art object detector Faster R-CNN in an agricultural use case, DeepAnomaly detects humans better and at longer ranges (45-90 m), with a smaller memory footprint and 7.3 times faster processing. The low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU.
FieldSAFE is a multi-modal dataset for the detection of static and moving obstacles in agriculture. The dataset includes synchronized recordings from an RGB camera, a stereo camera, a thermal camera, a 360-degree camera, a lidar, and a radar. Precise localization and pose are provided using an IMU and GPS. Ground truth for static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto, with GPS coordinates for moving obstacles. Detection information from the multiple detection algorithms and sensors is fused into a map using Inverse Sensor Models and occupancy grid maps. This thesis presents several scientific contributions to the state of the art in perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms, and procedures for multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms that have been demonstrated in an end-to-end real-time detection system. The contributions of this thesis have demonstrated, addressed, and solved critical issues in utilizing camera-based perception systems, which are essential to making autonomous vehicles in agriculture a reality.
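Fusing per-cell detection evidence through Inverse Sensor Models into an occupancy grid is conventionally done with the log-odds form of the Bayes update. The sketch below is a generic, minimal version of that idea, not the thesis software; the class and parameter names are illustrative.

```python
import numpy as np

def logodds(p: float) -> float:
    """Convert a probability to its log-odds representation."""
    return float(np.log(p / (1.0 - p)))

class OccupancyGrid:
    """Minimal log-odds occupancy grid fusing per-cell detection evidence."""

    def __init__(self, shape, p_prior=0.5):
        self.l_prior = logodds(p_prior)
        self.grid = np.full(shape, self.l_prior)

    def update(self, cell, p_occupied):
        # p_occupied is the inverse sensor model's occupancy probability
        # for this cell given one detection; Bayes update in log-odds form.
        self.grid[cell] += logodds(p_occupied) - self.l_prior

    def probabilities(self) -> np.ndarray:
        # Convert log-odds back to occupancy probabilities
        return 1.0 - 1.0 / (1.0 + np.exp(self.grid))
```

Because log-odds updates are additive, evidence from several independent detection algorithms or sensors can be fused into the same cell simply by calling `update` once per detection.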
Exploring the Technical Advances and Limits of Autonomous UAVs for Precise Agriculture in Constrained Environments
In precision agriculture with autonomous unmanned aerial vehicles (UAVs), drones hold significant potential to transform crop monitoring, management, and harvesting techniques. However, despite the numerous benefits of UAVs in smart farming, several technical challenges still need to be addressed in order to make their widespread adoption possible, especially in constrained environments. This paper provides a study of the technical aspects and limitations of autonomous UAVs in precision agriculture applications for constrained environments.
Remote sensing and on-farm experiments for determining in-season nitrogen rates in winter wheat – Options for implementation, model accuracy and remaining challenges
Optimised nitrogen (N) fertilisation can be used to increase farm profits, to achieve quality goals for produce, and to reduce environmental risks in the form of leaching and/or volatilisation of N compounds from the fields. This study examined options and challenges for remote sensing-based variable-rate supplemental N fertilisation in winter wheat (Triticum aestivum L.). The models were based on data from ten field trials conducted in different regions across Sweden over three years. A two-step approach for modelling optimal N rates, suitable for practical implementation in precision agriculture, was developed and evaluated. The expected accuracies for new sites and years were assessed by leave-one-entire-trial-out cross-validation. In the first step, the average N rate was modelled from site-specific information, including data that can be obtained from on-farm experiments, i.e. N uptake in plots without N fertilisation (zero-plots) and N uptake in plots with non-limiting N supply (max-plots). In the second step, additions to or subtractions from this average N rate were modelled based on vegetation indices (VIs) mapped by remote sensing. The mean absolute error of the best prediction was 14 kg N ha−1. In a practical application, however, there will be additional uncertainty from several sources, e.g. uncertainty in the assessment of yield potential. The best mean N rate model was based on geographical region, cultivar, N uptake in zero-plots, and yield potential, while the best model of relative N rate within the field used a new multispectral index (d75r6), designed to give a standardized measure of the steepness of the red edge of a crop canopy reflectance spectrum. Several other multispectral VIs also performed well, but red-green-blue indices were less useful.
We conclude that remote sensing (to capture within-field spatial variation patterns), on-farm experiments (to determine the field mean N rate), and the farmers' experience and knowledge of local conditions (e.g. to assess the yield potential) form a useful combination of information sources in decision support systems for variable-rate application of N. Options and remaining research needs for the setup of such a system are discussed.
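Leave-one-entire-trial-out cross-validation, used above to assess expected accuracy for new sites and years, can be sketched generically: each trial is held out in turn so that no plot from the evaluation trial leaks into model fitting. The function below is an illustrative helper under that assumption, not the study's code.

```python
def leave_one_trial_out(trial_ids):
    """Yield (train_indices, test_indices), holding out one whole trial at a time.

    trial_ids: a sequence assigning each observation (plot) to its field
    trial, so all plots from the held-out trial are excluded from fitting.
    """
    for held_out in sorted(set(trial_ids)):
        train = [i for i, t in enumerate(trial_ids) if t != held_out]
        test = [i for i, t in enumerate(trial_ids) if t == held_out]
        yield train, test
```

Grouping by trial rather than by plot is what makes the error estimate reflect transfer to genuinely new sites and years, since plots within one trial share soil, weather, and management conditions.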
UAV Cloud Platform for Precision Farming
New applications for Unmanned Aerial Vehicles come to light daily to solve some of modern society's problems. One of these is the potential for optimization in agricultural processes. For this reason, a new and constantly progressing area called Precision Farming arose in the last years of the twentieth century. Nowadays, a growing division of this field concerns Unmanned Aerial Vehicle applications.
Most traditional methods employed by farmers are ineffective and do not aid in the progression and solution of these issues. However, some fields have the potential to enhance many agricultural methods; such fields are Cyber-Physical Systems and Cloud Computing. Given their capabilities, such as aerial surveillance and mapping, Cyber-Physical Systems like Unmanned Aerial Vehicles are being used to monitor vast crops and to gather insightful data that would take far more time to collect by hand. However, these systems typically lack computing power and storage capacity, meaning that much of their gathered data cannot be stored and further analyzed locally. That is the obstacle that Cloud Computing can solve. By offloading computation, i.e. sending the collected data to a cloud, it is possible to leverage the enormous computing power and storage capabilities of remote data centers to store and analyze these datasets.
This dissertation proposes an architecture for this use case by leveraging the advantages of Cloud Computing to overcome the obstacles of Unmanned Aerial Vehicles. Moreover, this dissertation is a collaboration with an ongoing Horizon 2020 European project that
deals with precision farming and agriculture enhanced by Cyber-Physical Systems.