
    LiDAR-only based navigation algorithm for an autonomous agricultural robot

    The purpose of the work presented in this paper is to develop a general and robust approach for autonomous robot navigation inside a crop using LiDAR (Light Detection And Ranging) data. To be as robust as possible, the navigation must not require any prior information about the crop (such as the size and width of the rows). The developed approach is based on line extraction from 2D point clouds using a PEARL-based method. In this paper, additional filters and refinements of the PEARL algorithm are presented in the context of crop detection: a penalization of outliers, a model elimination step, a new model search, and a geometric constraint are proposed to improve crop detection. The approach has been tested in a simulator and compared with classical PEARL- and RANSAC-based approaches. Adding these modifications improved the crop detection and thus the robot navigation; those results are presented and discussed in this paper. Although this paper presents simulated results (to ease the comparison with other algorithms), the approach has also been successfully tested on an actual Oz weeding robot, developed by the French company Naio Technologies.
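
    To make the idea concrete, the following is a minimal sketch (Python, with hypothetical parameters; not the authors' implementation) of a PEARL-like alternation: candidate lines are proposed from random point pairs, each point is assigned to its cheapest line or to an outlier label, and poorly supported models are refit or eliminated, mirroring the outlier penalization and model elimination steps described above.

        import numpy as np

        def point_line_dist(points, line):
            # distance of each 2D point to the line a*x + b*y + c = 0 (unit normal)
            a, b, c = line
            return np.abs(points @ np.array([a, b]) + c)

        def fit_line(points):
            # total-least-squares line through a set of 2D points
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            a, b = vt[-1]                                 # unit normal of best fit
            return (a, b, -(a * centroid[0] + b * centroid[1]))

        def pearl_like(points, n_proposals=50, outlier_cost=0.5,
                       min_support=10, iters=5, seed=0):
            rng = np.random.default_rng(seed)
            models = []
            for _ in range(n_proposals):                  # propose from point pairs
                i, j = rng.choice(len(points), size=2, replace=False)
                d = points[j] - points[i]
                n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-12)
                models.append((n[0], n[1], -(n @ points[i])))
            labels = np.full(len(points), -1)
            for _ in range(iters):
                costs = np.stack([point_line_dist(points, m) for m in models])
                labels = costs.argmin(axis=0)             # cheapest model per point
                labels[costs.min(axis=0) > outlier_cost] = -1  # outlier penalization
                kept = [fit_line(points[labels == k])     # refit on inliers, and
                        for k in range(len(models))       # eliminate weak models
                        if np.sum(labels == k) >= min_support]
                if not kept:
                    break
                models = kept
            return models, labels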

    An Effective Multi-Cue Positioning System for Agricultural Robotics

    The self-localization capability is a crucial component for Unmanned Ground Vehicles (UGVs) in farming applications. Approaches based solely on visual cues or on low-cost GPS are prone to fail in such scenarios. In this paper, we present a robust and accurate 3D global pose estimation framework designed to take full advantage of heterogeneous sensory data. By modeling the pose estimation problem as a pose graph optimization, our approach simultaneously mitigates the cumulative drift introduced by motion estimation systems (wheel odometry, visual odometry, ...) and the noise introduced by raw GPS readings. Along with a suitable motion model, our system also integrates two additional types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random Field assumption. We demonstrate how using these additional cues substantially reduces the error along the altitude axis and, moreover, how this benefit spreads to the other components of the state. We report exhaustive experiments combining several sensor setups, showing accuracy improvements ranging from 37% to 76% with respect to the exclusive use of a GPS sensor. We show that our approach provides accurate results even if the GPS unexpectedly changes positioning mode. The code of our system, along with the acquired datasets, is released with this paper. Comment: Accepted for publication in IEEE Robotics and Automation Letters, 201
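
    The core of the pose-graph formulation can be pictured with a deliberately tiny example. The sketch below (Python; a 1D toy with made-up numbers, not the paper's 3D system, and omitting the DEM and MRF terms) fuses drifting odometry increments with noisy absolute GPS fixes in a single least-squares problem, which is the trade-off the paper's graph optimization balances.

        import numpy as np
        from scipy.optimize import least_squares

        # toy 1D pose graph: poses x_0..x_5, relative odometry edges (precise
        # but drifting) and absolute GPS priors (noisy but drift-free)
        odom = np.array([1.0, 1.1, 0.9, 1.0, 1.05])      # measured increments
        gps = np.array([0.0, 1.0, 2.0, 3.1, 3.9, 5.0])   # noisy absolute fixes
        w_odom, w_gps = 10.0, 1.0                        # information weights

        def residuals(x):
            r_odom = w_odom * ((x[1:] - x[:-1]) - odom)  # relative constraints
            r_gps = w_gps * (x - gps)                    # absolute constraints
            return np.concatenate([r_odom, r_gps])

        x0 = np.concatenate([[0.0], np.cumsum(odom)])    # dead-reckoning init
        sol = least_squares(residuals, x0)
        print(sol.x)   # fused trajectory: smooth locally, anchored globally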

    Local Motion Planner for Autonomous Navigation in Vineyards with a RGB-D Camera-Based Algorithm and Deep Learning Synergy

    With the advent of agriculture 3.0 and 4.0, researchers are increasingly focusing on the development of innovative smart farming and precision agriculture technologies by introducing automation and robotics into agricultural processes. Autonomous agricultural field machines have been gaining significant attention from farmers and industry as a way to reduce costs, human workload, and required resources. Nevertheless, achieving sufficient autonomous navigation capabilities requires the simultaneous cooperation of different processes: localization, mapping, and path planning are just some of the steps that aim at providing the machine with the right set of skills to operate in semi-structured and unstructured environments. In this context, this study presents a low-cost local motion planner for autonomous navigation in vineyards based only on an RGB-D camera, low-range hardware, and a dual-layer control algorithm. The first algorithm exploits the disparity map and its depth representation to generate a proportional control for the robotic platform. Concurrently, a second backup algorithm, based on representation learning and resilient to illumination variations, can take control of the machine in case of a momentary failure of the first block. Moreover, due to the dual nature of the system, after initial training of the deep learning model on an initial dataset, the strict synergy between the two algorithms opens the possibility of exploiting new automatically labeled data, coming from the field, to extend the existing model knowledge. The machine learning algorithm has been trained and tested, using transfer learning, with images acquired during different field surveys in the north of Italy, and then optimized for on-device inference with model pruning and quantization. Finally, the overall system has been validated with a customized robot platform in the relevant environment.
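
    The first, depth-driven control layer can be pictured with a short sketch. The following is a hypothetical illustration of depth-based proportional row following (invented gain and image geometry; not the authors' controller): the image column band with the greatest mean depth is taken as the free corridor between the vine rows, and the steering command is proportional to its offset from the image center.

        import numpy as np

        def steer_from_depth(depth, k_p=0.8):
            # depth: HxW array of ranges in meters from the RGB-D camera
            h, w = depth.shape
            band = depth[h // 3: 2 * h // 3]         # ignore sky and near ground
            col_depth = np.nanmean(band, axis=0)     # mean range per column
            target_col = int(np.argmax(col_depth))   # deepest column = corridor
            error = (target_col - w / 2) / (w / 2)   # normalized offset
            return k_p * error                       # angular-rate command

        # usage with a fake 120x160 frame whose corridor is slightly right
        depth = np.random.uniform(0.5, 2.0, (120, 160))
        depth[:, 95:105] += 6.0                      # deep corridor region
        print(steer_from_depth(depth))               # positive -> steer right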

    Embedded System Design of Robot Control Architectures for Unmanned Agricultural Ground Vehicles

    Engineering technology has matured to the extent where accompanying methods for unmanned field management are becoming a technologically achievable and economically viable solution to agricultural tasks that have traditionally been performed by humans or human-operated machines. Additionally, the rapidly increasing world population, and the daunting burden it places on farmers with regard to food production and crop yield demands, makes such advancements in the agriculture industry all the more imperative. Consequently, the sector is beginning to observe a noticeable shift, in which a number of scalable infrastructural changes are slowly being implemented in the modular machinery design of agricultural equipment. This work is pursued in an effort to provide firmware descriptions and hardware architectures that integrate cutting-edge technology into the embedded control architectures of agricultural machinery, to assist in achieving the end goal of complete and reliable unmanned agricultural automation. In this thesis, various autonomous control algorithms, integrated with obstacle avoidance or guidance schemes, were implemented on controller area network (CAN) based distributed real-time systems (DRTSs) in the form of two unmanned agricultural ground vehicles (UAGVs). Both vehicles are tailored to different applications in the agriculture domain, and both leverage state-of-the-art sensors and modules to attain the end objective of complete autonomy for the automation of various agricultural tasks. The further development of the embedded system design of these machines called for the developed firmware and hardware to be implemented on both an event-triggered and a time-triggered CAN bus control architecture, as each robot employed its own separate embedded control scheme. For the first UAGV, a multiple-GPS-waypoint navigation scheme is derived, developed, and evaluated to yield a fully controllable GPS-driven vehicle. Additionally, obstacle detection and avoidance capabilities were implemented on the vehicle to serve as a safety layer for the robot control architecture, giving the ground vehicle the ability to reliably detect and navigate around any obstacles in the vicinity of the assigned path. The second UAGV was a smaller robot designed for field navigation applications. For this robot, a fully autonomous sensor-based algorithm was proposed and implemented on the machine. It is demonstrated that the utilization of laser, LiDAR, and IMU sensors on a mobile robot platform allowed a fully autonomous, non-GPS, sensor-based algorithm to be employed for field navigation. The developed algorithm can serve as a viable solution for microclimate sensing in a field. Advisors: A. John Boye and Santosh Pitl
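
    As a concrete illustration of the waypoint scheme's geometry, the sketch below (Python; hypothetical gain and reach threshold, not the thesis firmware, which runs on a CAN-based distributed architecture) computes the bearing and distance to the active GPS waypoint, wraps the heading error, and advances to the next waypoint once within a reach radius.

        import math

        def bearing_and_distance(lat, lon, wp_lat, wp_lon):
            # equirectangular approximation, adequate over field-scale distances
            r = 6371000.0
            dx = math.radians(wp_lon - lon) * math.cos(math.radians(lat)) * r
            dy = math.radians(wp_lat - lat) * r
            return math.atan2(dx, dy), math.hypot(dx, dy)  # rad from north, m

        def waypoint_step(lat, lon, heading, waypoints, idx, k_p=1.5, reach_m=1.0):
            # one control cycle: steer toward the active waypoint and advance
            # to the next one once the vehicle is within reach_m of it
            bearing, dist = bearing_and_distance(lat, lon, *waypoints[idx])
            if dist < reach_m and idx < len(waypoints) - 1:
                idx += 1
                bearing, dist = bearing_and_distance(lat, lon, *waypoints[idx])
            err = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
            return k_p * err, idx                    # steering command, index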


    Augmented Perception for Agricultural Robots Navigation

    Producing food in a sustainable way is becoming very challenging today due to the lack of skilled labor, the unaffordable cost of labor when available, and the limited returns for growers as a result of the low produce prices demanded by big supermarket chains, in contrast to the ever-increasing costs of inputs such as fuel, chemicals, seeds, or water. Robotics emerges as a technological advance that can counterbalance some of these challenges, mainly in industrialized countries. However, the deployment of autonomous machines in open environments exposed to uncertainty and harsh ambient conditions poses an important challenge to reliability and safety. Consequently, a deep parametrization of the working environment in real time is necessary to achieve autonomous navigation. This article proposes a navigation strategy for guiding a robot along vineyard rows for field monitoring. Given that global positioning cannot be guaranteed permanently in any vineyard, the strategy is based on local perception and results from fusing three complementary technologies: 3D vision, lidar, and ultrasonics. Several perception-based navigation algorithms were developed between 2015 and 2019. After their comparison in real environments and conditions, results showed that the augmented perception derived from combining these three technologies provides a consistent basis for outlining the intelligent behavior of agricultural robots operating within orchards. This work was supported by the European Union Research and Innovation Programs under Grant N. 737669 and Grant N. 610953. Rovira Más, F.; Sáiz Rubio, V.; Cuenca-Cuenca, A. (2021). Augmented Perception for Agricultural Robots Navigation. IEEE Sensors Journal 21(10):11712-11727. https://doi.org/10.1109/JSEN.2020.3016081
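
    One simple way to picture the benefit of augmented perception is redundancy-weighted fusion. The sketch below (Python; an inverse-variance average with made-up numbers, not the article's actual fusion rule) combines lateral-offset estimates from stereo vision, lidar, and ultrasonics, skipping any sensor that drops out during a cycle.

        import numpy as np

        def fuse_lateral_offset(estimates):
            # estimates: list of (offset_m, variance); a failed sensor
            # reports None for this cycle and is simply skipped
            vals = [v for v, _ in estimates if v is not None]
            weights = [1.0 / var for v, var in estimates if v is not None]
            if not vals:
                return None                      # no sensor available
            return float(np.average(vals, weights=weights))

        # hypothetical readings from the three complementary sensors
        offset = fuse_lateral_offset([
            (0.12, 0.02),    # 3D stereo vision
            (0.10, 0.01),    # lidar row fit
            (None, 0.05),    # ultrasonic dropout this cycle
        ])
        print(offset)        # weighted toward the more confident lidar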

    Integration of a minimalistic set of sensors for mapping and localization of agricultural robots

    Robots have recently become ubiquitous in many aspects of daily life. For in-house applications there are vacuuming, mopping, and lawn-mowing robots. Swarms of robots have been used in Amazon warehouses for several years. Autonomous cars, despite being set back by several safety issues, are undeniably becoming the standard of the automobile industry. Beyond commercial applications, robots can perform various tasks, such as inspecting hazardous sites and taking part in search-and-rescue missions. Regardless of the end-user application, autonomy plays a crucial role in modern robots. The essential capabilities required for autonomous operation are mapping, localization, and navigation. The goal of this thesis is to develop a new approach to solve the problems of mapping, localization, and navigation for autonomous robots in agriculture. This type of environment poses unique challenges, such as repetitive patterns and large-scale, feature-sparse scenes, in comparison to urban scenarios, where good features such as pavements, buildings, road lanes, and traffic signs abound. In outdoor agricultural environments, a robot can rely on a Global Navigation Satellite System (GNSS) to determine its whereabouts, but this limits the robot's activities to areas with accessible GNSS signals and fails indoors. In such cases, exteroceptive sensors such as (RGB, depth, thermal) cameras, laser scanners, and Light Detection and Ranging (LiDAR), together with proprioceptive sensors such as Inertial Measurement Units (IMUs) and wheel encoders, can be fused to better estimate the robot's state. Generic approaches that combine several different sensors often yield superior estimation results, but they are not always optimal in terms of cost-effectiveness, modularity, reusability, and interchangeability. For agricultural robots, being robust for long-term operation is just as important as being cost-effective for mass production. We tackle this challenge by exploring and selectively using a handful of sensors, such as RGB-D cameras, LiDAR, and IMU, for representative agricultural environments. The sensor fusion algorithms provide high precision and robustness for mapping and localization, while assuring cost-effectiveness by employing only the sensors necessary for the task at hand. In this thesis, we extend LiDAR mapping and localization methods for urban scenarios to cope with agricultural environments, where slopes, vegetation, and trees cause traditional approaches to fail. Our mapping method substantially reduces the memory footprint of map storage, which is important for large-scale farms. We show how to handle the localization problem in dynamically growing strawberry polytunnels by using only a stereo visual-inertial (VI) and depth sensor to extract and track only invariant features, eliminating the need for remapping to deal with dynamic scenes. Also, as a demonstration of the minimalistic requirements for autonomous agricultural robots, we show the ability to autonomously traverse between rows in a difficult zigzag-like polytunnel environment using only a laser scanner. Furthermore, we present an autonomous navigation capability using only a camera, without explicitly performing mapping or localization. Finally, our mapping and localization methods are generic and platform-agnostic, and can be applied to different types of agricultural robots. All contributions presented in this thesis have been tested and validated on real robots in real agricultural environments. All approaches have been published in or submitted to peer-reviewed conference papers and journal articles.
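
    The row-traversal behavior using only a laser scanner can be pictured as a clearance-balancing controller. The following is a minimal sketch (Python; invented angular sectors and gain, not the thesis method): the mean range on the left and right sides of the scan is compared, and the robot steers toward the wider side, keeping it centered between the rows.

        import numpy as np

        def row_following_cmd(ranges, angles, k_p=1.0, v=0.4):
            # ranges/angles: 2D laser scan in the robot frame (m, rad);
            # positive angles to the left, zero straight ahead
            left = ranges[(angles > np.radians(20)) & (angles < np.radians(70))]
            right = ranges[(angles < np.radians(-20)) & (angles > np.radians(-70))]
            if len(left) == 0 or len(right) == 0:
                return v, 0.0                    # no rows visible: go straight
            error = np.mean(left) - np.mean(right)   # >0: drifting right
            return v, k_p * error                # (linear, angular) command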

    Actuators and sensors for application in agricultural robots: A review

    In recent years, with the rapid development of science and technology, agricultural robots have gradually begun to replace humans in various agricultural operations, changing traditional agricultural production methods. Not only is labor input reduced, but production efficiency can also be improved, which invariably contributes to the development of smart agriculture. This paper reviews the core technologies used for agricultural robots in unstructured environments. In addition, we review the technological progress of drive systems, control strategies, end-effectors, robotic arms, environmental perception, and other related systems. This research shows that in an unstructured agricultural environment, using cameras and light detection and ranging (LiDAR), as well as ultrasonic and satellite navigation equipment, and by integrating sensing, transmission, control, and operation, different types of actuators can be innovatively designed and developed to drive the advance of agricultural robots, to meet the delicate and complex requirements of agricultural products as operational objects, such that better productivity and standardization of agriculture can be achieved. In summary, agricultural production is developing toward a data-driven, standardized, and unmanned approach, with smart agriculture supported by actuator-driven agricultural robots. This paper concludes with a summary of the main existing technologies and challenges in the development of actuators for applications in agricultural robots, and an outlook regarding the primary development directions of agricultural robots in the near future.