    Local Motion Planner for Autonomous Navigation in Vineyards with a RGB-D Camera-Based Algorithm and Deep Learning Synergy

    With the advent of agriculture 3.0 and 4.0, researchers are increasingly focusing on the development of innovative smart farming and precision agriculture technologies by introducing automation and robotics into agricultural processes. Autonomous agricultural field machines have been gaining significant attention from farmers and industry as a way to reduce costs, human workload, and required resources. Nevertheless, achieving sufficient autonomous navigation capabilities requires the simultaneous cooperation of different processes: localization, mapping, and path planning are just some of the steps that aim to provide the machine with the right set of skills to operate in semi-structured and unstructured environments. In this context, this study presents a low-cost local motion planner for autonomous navigation in vineyards based only on an RGB-D camera, low-end hardware, and a dual-layer control algorithm. The first algorithm exploits the disparity map and its depth representation to generate a proportional control for the robotic platform. Concurrently, a second back-up algorithm, based on representation learning and resilient to illumination variations, can take control of the machine in case of a momentary failure of the first block. Moreover, owing to the dual nature of the system, after initial training of the deep learning model on an initial dataset, the strict synergy between the two algorithms opens the possibility of exploiting new automatically labeled data from the field to extend the existing model's knowledge. The machine learning algorithm was trained and tested, using transfer learning, on images acquired during several field surveys in northern Italy, and then optimized for on-device inference with model pruning and quantization. Finally, the overall system was validated with a customized robot platform in the relevant environment.
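
    As a rough illustration of the first control layer described above, the sketch below splits a depth frame into left and right halves and turns the free-space imbalance into a proportional steering command. The function name, gain, and depth cap (proportional_steering, k_p, max_depth) are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def proportional_steering(depth_map, max_depth=5.0, k_p=0.8):
            """Toy depth-based proportional controller for row following.

            depth_map: HxW array of depths in meters from an RGB-D camera.
            Returns an angular-velocity command steering toward the side
            of the vineyard row with more free space.
            """
            # Cap far readings and zero out invalid (NaN) returns.
            depth = np.nan_to_num(np.clip(depth_map, 0.0, max_depth), nan=0.0)

            # Average free space in the left and right image halves.
            w = depth.shape[1]
            left_space = depth[:, : w // 2].mean()
            right_space = depth[:, w // 2 :].mean()

            # Normalized imbalance in [-1, 1]; positive means more room right.
            error = (right_space - left_space) / max_depth

            # Proportional law: turn toward the freer side.
            return k_p * error

        # Example: synthetic frame with the canopy close on the left.
        frame = np.full((480, 640), 4.0)
        frame[:, :320] = 1.0
        print(proportional_steering(frame))  # positive -> steer right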

    Robotic Technologies for High-Throughput Plant Phenotyping: Contemporary Reviews and Future Perspectives

    Phenotyping plants is an essential component of any effort to develop new crop varieties. As plant breeders seek to increase crop productivity and produce more food for the future, the amount of phenotype information they require will also increase. Traditional plant phenotyping relying on manual measurement is laborious, time-consuming, error-prone, and costly. Plant phenotyping robots have emerged as a high-throughput technology for measuring the morphological, chemical, and physiological properties of large numbers of plants. Several robotic systems have been developed to fulfill different phenotyping missions. In particular, robotic phenotyping has the potential to enable efficient monitoring of changes in plant traits over time, both in controlled environments and in the field. The operation of these robots can be challenging as a result of the dynamic nature of plants and agricultural environments. Here we discuss developments in phenotyping robots, the challenges that have been overcome, and others that remain outstanding. In addition, some prospective applications of phenotyping robots are presented. We optimistically anticipate that autonomous and robotic systems will make great leaps forward in the next 10 years, advancing plant phenotyping research into a new era.

    Robotic 3D Plant Perception and Leaf Probing with Collision-Free Motion Planning for Automated Indoor Plant Phenotyping

    Various instrumentation devices for plant physiology studies, such as chlorophyll fluorometers and Raman spectrometers, require leaf probing with accurate probe positioning and orientation with respect to the leaf surface. In this work, we aimed to automate this process in a high-throughput manner with a Kinect V2 sensor, a high-precision 2D laser profilometer, and a 6-axis robotic manipulator. The relatively wide field of view and high resolution of the Kinect V2 allowed rapid capture of the full 3D environment in front of the robot. Given the number of plants, the location and size of each plant were estimated by K-means clustering. A real-time collision-free motion planning framework based on Probabilistic Roadmaps was adopted to maneuver the robotic manipulator without colliding with the plants. Each plant was scanned from the top with the short-range profilometer to obtain a high-precision point cloud, from which potential leaf clusters were extracted by region-growing segmentation. Each leaf segment was further partitioned into small patches by Voxel Cloud Connectivity Segmentation. Only the small patches with low root-mean-square (RMS) plane-fitting residuals were used to compute probing poses. To evaluate probing accuracy, a square surface was scanned at various angles and its centroid was probed perpendicularly, with an average probing position error of 1.5 mm and an average probing angle error of 0.84 degrees. Our growth-chamber leaf probing experiment showed that the average motion planning time was 0.4 seconds and the average traveled distance of the tool center point was 1 meter.
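
    The plane-fitting filter lends itself to a short sketch: fit a plane to each candidate patch by SVD, discard patches whose RMS residual is too high, and probe the survivors along the plane normal. This is a minimal reconstruction of the idea under assumed data layouts; the threshold and the function names are hypothetical.

        import numpy as np

        def plane_fit(points):
            """Least-squares plane fit via SVD.

            points: Nx3 array for one leaf patch from the profilometer scan.
            Returns (centroid, unit normal, RMS point-to-plane residual).
            """
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            normal = vt[-1]  # direction of least variance = plane normal
            rms = np.sqrt(np.mean(((points - centroid) @ normal) ** 2))
            return centroid, normal, rms

        def probing_poses(patches, rms_threshold=0.5):
            """Keep only near-planar patches (low RMS, in the scan's units)
            and return (position, approach direction) pairs so the probe
            meets the leaf surface perpendicularly."""
            poses = []
            for patch in patches:
                centroid, normal, rms = plane_fit(patch)
                if rms < rms_threshold:
                    poses.append((centroid, normal))
            return poses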

    Building an Aerial-Ground Robotics System for Precision Farming: An Adaptable Solution

    The application of autonomous robots in agriculture is gaining popularity thanks to the high impact it may have on food security, sustainability, resource use efficiency, reduction of chemical treatments, and the optimization of human effort and yield. With this vision, the Flourish research project aimed to develop an adaptable robotic solution for precision farming that combines the aerial survey capabilities of small autonomous unmanned aerial vehicles (UAVs) with targeted intervention performed by multi-purpose unmanned ground vehicles (UGVs). This paper presents an overview of the scientific and technological advances and outcomes obtained in the project. We introduce multi-spectral perception algorithms and aerial and ground-based systems developed for monitoring crop density, weed pressure, and crop nitrogen nutrition status, and for accurately classifying and locating weeds. We then introduce the navigation and mapping systems tailored to our robots in the agricultural environment, as well as the modules for collaborative mapping. We finally present the ground intervention hardware, software solutions, and interfaces we implemented and tested in different field conditions and with different crops. We describe a real use case in which a UAV collaborates with a UGV to monitor the field and to perform selective spraying without human intervention.
    (Published in IEEE Robotics & Automation Magazine, vol. 28, no. 3, pp. 29-49, Sept. 2021.)
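
    One small piece of such a collaboration can be sketched in code: given georeferenced weed detections from the aerial survey, the ground vehicle needs a visiting order before spraying. The greedy nearest-neighbor tour below is a deliberately simple stand-in, not the Flourish planner; plan_spray_route and the coordinates are made up for illustration.

        import math

        def plan_spray_route(ugv_position, weed_locations):
            """Order weed detections from the UAV survey into a greedy
            nearest-neighbor tour for the ground vehicle."""
            route, current = [], ugv_position
            remaining = list(weed_locations)
            while remaining:
                nearest = min(remaining, key=lambda p: math.dist(current, p))
                remaining.remove(nearest)
                route.append(nearest)
                current = nearest
            return route

        # Example: weed coordinates (meters, field frame) from the UAV.
        detections = [(12.0, 3.5), (2.1, 8.0), (5.4, 1.2)]
        print(plan_spray_route((0.0, 0.0), detections))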

    Viewpoint Planning based on Shape Completion for Fruit Mapping and Reconstruction

    Robotic systems in agriculture not only enable increasing automation of farming activities but also pose new challenges for robotics due to the unstructured environment and the non-rigid structure of crops. In particular, active perception for fruit mapping and harvesting is a difficult task, since occlusions occur frequently and image segmentation provides only limited accuracy on the actual shape of the fruits. In this paper, we present a viewpoint planning approach that explicitly uses the shape prediction from collected data to guide the sensor to view as-yet-unobserved parts of the fruits. We developed a novel pipeline for continuous interaction between prediction and observation to maximize the information gain about sweet pepper fruits. We adapted two different shape prediction approaches, namely parametric superellipsoid fitting and model-based non-rigid latent space registration, and integrated them into our Region of Interest (RoI) viewpoint planner. Additionally, we used a new concept of viewpoint dissimilarity to help the planner select good viewpoints and to shorten planning times. Our simulation experiments with a UR5e arm equipped with a Realsense L515 sensor provide a quantitative demonstration of the efficacy of our iterative shape-completion-based viewpoint planning. In comparative experiments with a state-of-the-art viewpoint planner, we demonstrate improvement not only in the estimation of fruit sizes but also in their reconstruction. Finally, we show the viability of our approach for mapping sweet peppers with a real robotic system in a commercial glasshouse.
    (Keywords: Agricultural Automation, Viewpoint Planning, Active Perception.)
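
    The viewpoint-dissimilarity idea can be illustrated with a small sketch: score each pair of viewpoints by positional distance plus a weighted angular difference between viewing directions, and reject candidates that are too similar to views already executed. The weighting, threshold, and function names are assumptions for illustration, not the paper's formulation.

        import numpy as np

        def viewpoint_dissimilarity(vp_a, vp_b, w_angle=0.5):
            """Dissimilarity of two viewpoints (position, unit view direction):
            Euclidean distance plus a weighted angle between directions."""
            (pos_a, dir_a), (pos_b, dir_b) = vp_a, vp_b
            d_pos = np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b))
            d_ang = np.arccos(np.clip(np.dot(dir_a, dir_b), -1.0, 1.0))
            return d_pos + w_angle * d_ang

        def select_next_view(candidates, gains, visited, min_dissim=0.3):
            """Pick the highest-gain candidate that differs enough from
            every viewpoint already executed, avoiding redundant views."""
            best_vp, best_gain = None, -np.inf
            for vp, gain in zip(candidates, gains):
                if visited and min(viewpoint_dissimilarity(vp, v)
                                   for v in visited) < min_dissim:
                    continue  # too close to an already-used view
                if gain > best_gain:
                    best_vp, best_gain = vp, gain
            return best_vp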

    MAP-NBV: Multi-agent Prediction-guided Next-Best-View Planning for Active 3D Object Reconstruction

    We propose MAP-NBV, a prediction-guided active algorithm for 3D reconstruction with multi-agent systems. Prediction-based approaches have shown great improvement in active perception tasks by learning cues about structures in the environment from data, but these methods primarily focus on single-agent systems. We design a next-best-view approach that utilizes geometric measures over the predictions and jointly optimizes the information gain and control effort for efficient collaborative 3D reconstruction of the object. Our method achieves a 22.75% improvement over the prediction-based single-agent approach and a 15.63% improvement over the non-predictive multi-agent approach. We make our code publicly available through our project website: http://raaslab.org/projects/MAPNBV/
    (7 pages, 7 figures, 2 tables. Submitted to MRS 2023.)
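
    A minimal sketch of the joint objective under assumed interfaces: each candidate view is scored as information gain minus weighted control effort, and agents greedily claim views one at a time, with the gain function free to discount views that overlap those already claimed. This is not the MAP-NBV algorithm itself; all names and the trade-off weight lam are hypothetical.

        def assign_views(agents, candidates, gain_fn, cost_fn, lam=0.2):
            """Greedy multi-agent next-best-view assignment.

            gain_fn(view, claimed) -> expected new information, which may
            be discounted for overlap with already-claimed views.
            cost_fn(agent, view) -> control effort (e.g. travel distance).
            Assumes at least as many candidate views as agents.
            """
            remaining, claimed = list(candidates), []
            assignment = {}
            for agent in agents:
                best = max(remaining,
                           key=lambda v: gain_fn(v, claimed) - lam * cost_fn(agent, v))
                assignment[agent] = best
                claimed.append(best)
                remaining.remove(best)
            return assignment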

    A Hybrid Cable-Driven Robot for Non-Destructive Leafy Plant Monitoring and Mass Estimation using Structure from Motion

    We propose a novel hybrid cable-based robot with a manipulator and camera for high-accuracy, medium-throughput plant monitoring in a vertical hydroponic farm and, as an example application, demonstrate non-destructive plant mass estimation. Plant monitoring with high temporal and spatial resolution is important to both farmers and researchers for detecting anomalies and developing predictive models of plant growth. The availability of high-quality, off-the-shelf structure-from-motion (SfM) and photogrammetry packages has enabled a vibrant community of roboticists to apply computer vision to non-destructive plant monitoring. While existing approaches tend to focus either on high throughput (e.g., satellite, unmanned aerial vehicle (UAV), vehicle-mounted, or conveyor-belt imagery) or on high accuracy and robustness to occlusions (e.g., turntable scanners or robot arms), we propose a middle ground that achieves high accuracy with a medium-throughput, highly automated robot. Our design pairs the workspace scalability of a cable-driven parallel robot (CDPR) with the dexterity of a 4-degree-of-freedom (DoF) robot arm to autonomously image many plants from a variety of viewpoints. We describe our robot design and demonstrate it experimentally by collecting daily photographs of 54 plants from 64 viewpoints each. We show that our approach can produce scientifically useful measurements, operate fully autonomously after initial calibration, and produce better reconstructions and plant property estimates than those of over-canopy methods (e.g., UAV). As example applications, we show that our system can successfully estimate plant mass with a mean absolute error (MAE) of 0.586 g and, when used to perform hypothesis testing on the relationship between mass and age, produces p-values comparable to ground-truth data (p=0.0020 and p=0.0016, respectively).
    (8 pages, 10 figures, 4 tables; submitted to ICRA 2023.)
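
    As a sketch of how a non-destructive mass estimate could be derived from the reconstructions, the snippet below fits a linear model from a volume proxy (e.g., computed from the SfM point cloud) to destructively measured masses, then predicts mass for new plants. The linear form, variable names, and numbers are illustrative assumptions, not the paper's pipeline.

        import numpy as np

        def fit_mass_model(volumes, masses):
            """Fit mass ~ a * volume + b by least squares on training
            plants whose mass was measured destructively."""
            A = np.stack([volumes, np.ones_like(volumes)], axis=1)
            (a, b), *_ = np.linalg.lstsq(A, masses, rcond=None)
            return a, b

        def predict_mass(volume, a, b):
            return a * volume + b

        # Made-up example: reconstructed canopy volumes in cm^3 vs grams.
        train_vol = np.array([120.0, 260.0, 410.0, 530.0])
        train_mass = np.array([3.1, 6.8, 10.9, 14.2])
        a, b = fit_mass_model(train_vol, train_mass)
        print(predict_mass(300.0, a, b))  # non-destructive estimate (g)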