
    Computational Contributions to the Automation of Agriculture

    The purpose of this paper is to explore ways that computational advancements have enabled the automation of agriculture from start to finish. With a major need for agricultural advancements because of food and water shortages, some farmers have begun creating their own solutions to these problems. This paper, however, primarily explores current research topics in the automation of agriculture. Digital agriculture is surveyed, focusing on ways that data collection can be beneficial. Self-driving technology is then explored, with emphasis on farming applications. Machine vision technology is also detailed, with specific application to weed management and crop harvesting. Finally, the effects of automating agriculture are briefly considered, including impacts on labor, the environment, and farmers themselves.

    An Evaluation of Three Different Infield Navigation Algorithms

    In this chapter, we present and evaluate three different infield navigation algorithms based on readings from a LIDAR sensor. All three algorithms are tested on a small field robot and used to autonomously drive the robot between two adjacent rows of maize plants. The first algorithm is the simplest: it takes distance readings from the left and right sides and, if the robot is not in the center of the mid-row space, adjusts its course by turning the robot in the appropriate direction. The second approach groups the left and right readings into two vertical lines using a least-squares fit and adjusts the robot's course according to the calculated distance and orientation to both lines. The third approach fits an optimal triangle between the robot and the plants and adjusts the course based on the triangle's shape. All three algorithms are tested first in a simulated environment (ROS Stage) and then in an outdoor environment (a maize test field), comparing the optimal line with the actual calculated position of the robot. The tests show that all three approaches work, with errors of 0.041 ± 0.034 m for the first algorithm, 0.07 ± 0.059 m for the second, and 0.078 ± 0.055 m for the third.
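The first (simplest) of these algorithms can be sketched in a few lines. This is an illustrative reconstruction, not the chapter's code: the averaging of side readings and the proportional gain are assumptions.

```python
# Sketch of the simplest infield navigation rule: compare the mean LIDAR
# distance to the left and right rows and steer back toward the centre.
# The gain value is an assumed placeholder, not the chapter's tuning.

def steering_correction(left_dists, right_dists, gain=0.5):
    """Return a steering command (rad); positive turns the robot left.

    left_dists / right_dists: LIDAR range readings (m) to each row of plants.
    """
    left = sum(left_dists) / len(left_dists)
    right = sum(right_dists) / len(right_dists)
    # Drifting right shrinks the right-hand gap, so error > 0 -> turn left.
    error = left - right
    return gain * error

# Robot is 0.1 m closer to the right-hand row -> a small left turn.
print(steering_correction([0.45, 0.47], [0.35, 0.37]))

# Robot already centred -> no correction.
print(steering_correction([0.3], [0.3]))  # → 0.0
```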

    Control de velocidad traslacional y orientación de un robot dedicado a agricultura de precisión (Translational speed and orientation control of a robot for precision agriculture)

    Various agricultural tasks can cause fatigue or trigger illnesses in farmers. To assist with these tasks, robotics has been employed as a way to minimize these drawbacks. With this in mind, the Universidad Militar Nueva Granada designed and built a robot called CERES, which performs tasks such as weed removal and fumigation. For CERES to move through crops following a desired trajectory, the robot must be driven at a specific translational speed and orientation, which is guaranteed by the use of controllers. This article presents the design and implementation of PID controllers for tracking linear speed and orientation, ensuring that the settling time is met as the error tends to zero, using the SSV (Steady State Value) criterion, and producing control signals admissible by the robot's hardware.
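The control loop the article describes can be sketched with a minimal discrete PID; the gains, saturation limits, and first-order plant model below are illustrative assumptions, not the published CERES tuning.

```python
# Minimal discrete PID sketch of the kind applied to CERES's linear-speed
# and orientation loops. All numeric values here are assumed placeholders.

class PID:
    def __init__(self, kp, ki, kd, u_min, u_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        # Output saturation keeps the command admissible by the hardware.
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.u_min, min(self.u_max, u))

# Drive an assumed first-order speed model toward a 0.8 m/s setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.0, u_min=-1.0, u_max=1.0)
v, dt = 0.0, 0.05
for _ in range(400):  # 20 s of simulated time
    u = pid.update(0.8, v, dt)
    v += (u - 0.2 * v) * dt  # toy plant: slight drag on the robot
print(round(v, 3))
```

The integral term removes the steady-state error that a pure proportional controller would leave against the plant's drag, which is the sense in which the settling behaviour is "ensured" by the controller design.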

    Intelligent vision-based navigation system for mobile robot: A technological review

    Vision systems are gradually becoming more important. As computing technology advances, they have been widely utilized in many industrial and service sectors. One critical application of vision systems is navigating mobile robots safely, which requires several technological elements. This article reviews recent research on intelligent vision-based navigation systems for mobile robots, covering the use of mobile robots in sectors such as manufacturing, warehousing, agriculture, outdoor navigation, and other services. The intelligent algorithms used in developing robot vision systems are also reviewed.

    Multi-Modal Detection and Mapping of Static and Dynamic Obstacles in Agriculture for Process Evaluation

    Korthals T, Kragh M, Christiansen P, Karstoft H, Jørgensen RN, Rückert U. Multi-Modal Detection and Mapping of Static and Dynamic Obstacles in Agriculture for Process Evaluation. Frontiers in Robotics and AI. 2018;5:26.
    Today, agricultural vehicles are available that can automatically perform tasks such as weed detection and spraying, mowing, and sowing while being steered automatically. However, for such systems to be fully autonomous and self-driven, not only their specific agricultural tasks must be automated. An accurate and robust perception system automatically detecting and avoiding all obstacles must also be realized to ensure safety of humans, animals, and other surroundings. In this paper, we present a multi-modal obstacle and environment detection and recognition approach for process evaluation in agricultural fields. The proposed pipeline detects and maps static and dynamic obstacles globally, while providing process-relevant information along the traversed trajectory. Detection algorithms are introduced for a variety of sensor technologies, including range sensors (lidar and radar) and cameras (stereo and thermal). Detection information is mapped globally into semantical occupancy grid maps and fused across all sensors with late fusion, resulting in accurate traversability assessment and semantical mapping of process-relevant categories (e.g., crop, ground, and obstacles). Finally, a decoding step uses a Hidden Markov model to extract relevant process-specific parameters along the trajectory of the vehicle, thus informing a potential control system of unexpected structures in the planned path. The method is evaluated on a public dataset for multi-modal obstacle detection in agricultural fields. Results show that a combination of multiple sensor modalities increases detection performance and that different fusion strategies must be applied between algorithms detecting similar and dissimilar classes.
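The late-fusion step described above can be sketched for a single grid cell: each sensor contributes its own per-cell class likelihoods, and the per-sensor estimates are combined only at the map level. The class set and the naive-Bayes product rule below are illustrative assumptions rather than the paper's exact fusion operator.

```python
# Late fusion of semantic occupancy evidence for one grid cell.
# Each sensor supplies a likelihood per class; fusing multiplies the
# likelihoods and renormalises (an assumed naive-Bayes combination).

CLASSES = ["ground", "crop", "obstacle"]  # assumed process-relevant categories

def fuse_cell(per_sensor_likelihoods):
    """Fuse one cell's class likelihoods from several sensors into a
    normalised class distribution."""
    fused = [1.0] * len(CLASSES)
    for likelihoods in per_sensor_likelihoods:
        fused = [f * l for f, l in zip(fused, likelihoods)]
    total = sum(fused)
    return [f / total for f in fused]

# Lidar is unsure about this cell; the thermal camera votes strongly for
# "obstacle". The fused estimate follows the confident sensor.
lidar = [0.4, 0.3, 0.3]
thermal = [0.1, 0.1, 0.8]
fused = fuse_cell([lidar, thermal])
print(CLASSES[fused.index(max(fused))])  # → obstacle
```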

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.

    Active Object Classification from 3D Range Data with Mobile Robots

    This thesis addresses the problem of how to improve the acquisition of 3D range data with a mobile robot for the task of object classification. Establishing the identities of objects in unknown environments is fundamental for robotic systems and helps enable many abilities such as grasping, manipulation, or semantic mapping. Objects are recognised from data obtained by sensor observations; however, these data are highly dependent on viewpoint: the variation in position and orientation of the sensor relative to an object can result in large variation in perception quality. Additionally, cluttered environments present a further challenge because key data may be missing. These issues are not always solved by traditional passive systems, where data are collected from a fixed navigation process and then fed into a perception pipeline. This thesis considers an active approach to data collection by deciding where it is most appropriate to make observations for the perception task. The core contributions of this thesis are a non-myopic planning strategy to collect data efficiently under resource constraints, and supporting viewpoint prediction and evaluation methods for object classification. Our approach to planning uses Monte Carlo methods coupled with a classifier based on non-parametric Bayesian regression. We present a novel anytime and non-myopic planning algorithm, Monte Carlo active perception, that extends Monte Carlo tree search to partially observable environments and the active perception problem. This is combined with a particle-based estimation process and a learned observation likelihood model that uses Gaussian process regression. To support planning, we present 3D point cloud prediction algorithms and utility functions that measure the quality of viewpoints by their discriminatory ability and effectiveness under occlusion. The utility of viewpoints is quantified by information-theoretic metrics, such as mutual information, and an alternative utility function that exploits learned data is developed for special cases. The algorithms in this thesis are demonstrated in a variety of scenarios. We extensively test our online planning and classification methods in simulation as well as with indoor and outdoor datasets. Furthermore, we perform hardware experiments with different mobile platforms equipped with different types of sensors. Most significantly, our hardware experiments with an outdoor robot are, to our knowledge, the first demonstrations of online active perception in a real outdoor environment. Active perception has broad significance in many applications. This thesis emphasises the advantages of an active approach to object classification and presents its integration with a wide range of robotic systems, sensors, and perception algorithms. By demonstrating performance enhancements and versatility, our hope is that the concept of considering perception and planning in an integrated manner will be of benefit in improving current systems that rely on passive data collection.
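The mutual-information viewpoint utility the abstract mentions can be sketched for a discrete toy case: score a candidate viewpoint by how much observing from it is expected to reduce uncertainty about the object class. The class prior and observation models below are illustrative assumptions, not the thesis's learned Gaussian-process models.

```python
# Viewpoint utility as mutual information I(C; Z) between the object class C
# and the discrete observation Z predicted for that viewpoint.
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def viewpoint_utility(prior, obs_model):
    """I(C; Z) = H(C) - E_Z[H(C | Z)], with obs_model[z][c] = P(z | c)."""
    mi = entropy(prior)  # H(C)
    for likelihoods in obs_model:
        pz = sum(l * p for l, p in zip(likelihoods, prior))
        if pz == 0:
            continue
        posterior = [l * p / pz for l, p in zip(likelihoods, prior)]
        mi -= pz * entropy(posterior)  # subtract expected posterior entropy
    return mi

prior = [0.5, 0.5]  # two equally likely candidate classes (toy assumption)
# A viewpoint whose observations separate the two classes well...
discriminative = [[0.9, 0.2], [0.1, 0.8]]
# ...versus an occluded viewpoint whose observations reveal nothing.
occluded = [[0.5, 0.5], [0.5, 0.5]]
print(viewpoint_utility(prior, discriminative) > viewpoint_utility(prior, occluded))
```

A planner would evaluate this utility over candidate viewpoints and prefer the one with the highest expected information gain, which is the sense in which occluded or uninformative viewpoints score poorly.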