8 research outputs found

    Safe Human-Robot Interaction in Agriculture

    Robots are finding a growing number of applications in agriculture, where (partial) automation promises increased productivity. However, this raises complex technical problems, which are magnified when these robots are intended to work side-by-side with human workers. In this contribution we present an exploratory pilot study characterising interactions between a robot performing an in-field transportation task and human fruit pickers. Partly an effort to inform the development of a fully autonomous system, the study emphasises involving the key stakeholders (i.e. the pickers themselves) in the process so as to maximise the potential impact of such an application.

    RASberry - Robotic and Autonomous Systems for Berry Production

    The soft fruit industry is facing unprecedented challenges due to its reliance on manual labour. We present a newly launched robotics initiative that will help address the issues faced by the industry and enable automation of the main processes involved in soft fruit production. The RASberry project (Robotics and Autonomous Systems for Berry Production) aims to develop autonomous fleets of robots for the horticultural industry. To achieve this goal, the project will bridge several current technological gaps, including the development of a mobile platform suitable for strawberry fields, software components for fleet management, in-field navigation and mapping, long-term operation, and safe human-robot collaboration. In this paper, we provide a general overview of the project, describe the main system components, highlight interesting challenges from a control point of view, and then present three specific applications of robotic fleets in soft fruit production. The applications demonstrate how robotic fleets can benefit the soft fruit industry by significantly decreasing production costs, addressing labour shortages, and serving as a first step towards fully autonomous robotic systems for agriculture.

    Logistics

    Logistics is crucial for all agricultural production processes, which explains its cross-sectional function. The position of agricultural transport has been redefined by the revised German Road Haulage Act (GĂŒterkraftverkehrsgesetz) and Federal Highway Toll Act (Bundesfernstraßenmautgesetz), which changes the technical focus, particularly for contractors. This is also reflected in the trend towards agricultural trucks, with developments in soil-protecting tires and agricultural body systems. The digitalization of agriculture is evident in logistics through the growing number of management systems for navigation, control, data exchange, documentation and simulation, in which AI systems are increasingly applied. In the field of agricultural robotics, the first transport systems for field use in special crop cultivation are being offered.

    Responsible Development of Autonomous Robots in Agriculture

    Despite the potential contributions of autonomous robots to agricultural sustainability, social, legal and ethical issues threaten their adoption. We discuss how responsible innovation principles can be embedded into the user-centred design of autonomous robots and identify areas for further empirical research.

    An Agricultural Event Prediction Framework towards Anticipatory Scheduling of Robot Fleets: General Concepts and Case Studies

    Harvesting on soft-fruit farms is labour intensive, time consuming and severely affected by a scarcity of skilled labour. Among the activities involved in soft-fruit harvesting, human pickers spend 20–30% of the overall operation time on logistics activities. This unproductive time can be reduced, for example, by optimally deploying a fleet of agricultural robots and scheduling them by anticipating the pickers' activity behaviour (state) during harvesting. In this paper, we propose a framework for spatio-temporal prediction of human pickers' activities while they are picking fruit in agricultural fields. We exploit the temporal patterns of the picking operation and 2D discrete points, called topological nodes, as spatial constraints imposed by the agricultural environment. Both types of information are used in the prediction framework, in combination with a variant of the Hidden Markov Model (HMM) algorithm, to create two modules. The proposed methodology is validated with two test cases. In Test Case 1, the first module selects an optimal temporal model, called the picking_state_progression model, which uses temporal features of a picker state (event) to statistically evaluate an adequate number of intra-states, also called sub-states. In Test Case 2, the second module uses the outcome from the optimal temporal model in the subsequent spatial model, called the node_transition model, and performs spatio-temporal predictions of the picker's movement while the picker is in a particular state. A Discrete Event Simulation (DES) framework, a proven agricultural multi-robot logistics model, is used to simulate different picking operation scenarios with and without our proposed prediction framework, and the results are statistically compared. Our prediction framework can reduce the so-called unproductive logistics time in a fully manual harvesting process by about 80 percent in the overall picking operation. This research also indicates that different rates of picking operation involve different numbers of sub-states, and that these sub-states are associated with different trends considered in the spatio-temporal predictions.
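
    The abstract gives only a high-level description of the prediction framework, but its core idea, a Markov model over picker sub-states whose spatial movement between topological nodes depends on the current sub-state, can be sketched as below. The state names, transition probabilities and helper functions are illustrative assumptions, not the authors' implementation; a fleet scheduler could use such forward predictions to dispatch a robot towards the node where a tray-full event is most likely before the picker has to wait.

# Minimal, illustrative sketch (not the authors' code) of an HMM-style predictor
# over picker sub-states, with state-dependent movement between topological nodes.
import numpy as np

# Hypothetical picker sub-states within the "picking" activity.
states = ["picking_start", "picking_mid", "tray_full"]

# Assumed sub-state transition matrix: rows = current state, columns = next state.
A = np.array([
    [0.7, 0.3, 0.0],   # picking_start -> ...
    [0.0, 0.8, 0.2],   # picking_mid   -> ...
    [0.0, 0.0, 1.0],   # tray_full stays until a robot collects the tray
])

# Assumed probability of moving to an adjacent topological node per time step,
# conditioned on the current sub-state (pickers move less while actively picking).
node_move_prob = {"picking_start": 0.1, "picking_mid": 0.3, "tray_full": 0.9}

def predict_state_distribution(p0: np.ndarray, steps: int) -> np.ndarray:
    """Propagate the sub-state distribution forward by `steps` time steps."""
    p = p0.copy()
    for _ in range(steps):
        p = p @ A
    return p

def expected_node_moves(p0: np.ndarray, steps: int) -> float:
    """Expected number of topological-node transitions over the prediction horizon."""
    p, moves = p0.copy(), 0.0
    for _ in range(steps):
        moves += sum(p[i] * node_move_prob[s] for i, s in enumerate(states))
        p = p @ A
    return moves

if __name__ == "__main__":
    p0 = np.array([1.0, 0.0, 0.0])            # picker has just started picking
    print(predict_state_distribution(p0, 5))  # likely sub-state after 5 steps
    print(expected_node_moves(p0, 5))         # how far along the row they may have moved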

    Automatic multi-camera hand-eye calibration for robotic workcells

    Human-robot collaboration (HRC) is an increasingly successful research field, widely investigated for several industrial tasks. Collaborative robots can physically interact with humans in a shared environment while guaranteeing a high level of human safety throughout the working process. This can be achieved through a vision system, based on a single camera or a multi-camera setup, which provides the manipulator with essential information about the surrounding workspace and human behaviour, ensuring collision avoidance with objects and human operators. However, to guarantee human safety and a working system in which the robot arm is aware of its surroundings and can monitor operator motions, a reliable hand-eye calibration is needed. A further improvement for safe human-robot collaboration is provided by multi-camera hand-eye calibration: the additional sensors ensure a constant and more reliable view of the robot arm and its whole workspace, improving human safety and giving the robot a greater ability to avoid collisions. This thesis focuses on the development of an automatic multi-camera calibration method for robotic workcells that guarantees high human safety and an accurate working system. The proposed method has two main properties. It is automatic, since it exploits the robot arm, with a planar target attached to its end-effector, to accomplish the image-acquisition phase necessary for the calibration, which is generally carried out with manual procedures. This removes inaccurate human intervention as much as possible and speeds up the whole calibration process. The second main feature is that our approach enables the calibration of a multi-camera system suitable for robotic workcells that are larger than those commonly considered in the literature. Our multi-camera hand-eye calibration method was tested through several experiments with the Franka Emika Panda robot arm and with different sensors (Microsoft Kinect V2, Intel RealSense depth camera D455 and Intel RealSense LiDAR camera L515) to prove its flexibility and to determine which hardware devices achieve the highest calibration accuracy. Accurate results are generally achieved even in large robotic workcells where cameras are placed at a distance d = 3 m from the robot arm, with a reprojection error lower than 1 pixel, whereas other state-of-the-art methods cannot guarantee a proper calibration at these distances. Moreover, our method is compared against other single- and multi-camera calibration techniques, and the proposed calibration process achieves the highest accuracy with respect to other methods found in the literature, which are mainly focused on the calibration between a single camera and the robot arm.
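
    The thesis abstract does not reproduce the calibration procedure, so the following is a minimal per-camera sketch using OpenCV's calibrateHandEye for the fixed-camera ("eye-to-hand") configuration with a planar target on the end-effector. The function name, data structures and pose-inversion convention are assumptions, not the thesis' code; repeating the step for each camera and expressing all results in the robot-base frame would yield a multi-camera calibration of the workcell.

# Hedged sketch: per-camera hand-eye calibration for a fixed ("eye-to-hand") camera
# observing a planar target mounted on the robot end-effector, using OpenCV.
# Only cv2.calibrateHandEye is a real OpenCV API (available since OpenCV 4.1);
# the surrounding function and data layout are assumptions for illustration.
from typing import List, Tuple
import numpy as np
import cv2

def calibrate_camera_to_base(
    gripper_to_base: List[Tuple[np.ndarray, np.ndarray]],  # (R 3x3, t 3x1) from the robot controller
    target_to_cam: List[Tuple[np.ndarray, np.ndarray]],    # (R 3x3, t 3x1) from e.g. chessboard PnP
) -> Tuple[np.ndarray, np.ndarray]:
    """Estimate the pose of one fixed camera in the robot-base frame.

    A commonly used convention for the eye-to-hand case is to feed calibrateHandEye
    the *inverted* robot poses (base-to-gripper); the returned transform is then
    interpreted as camera-to-base.
    """
    R_base2gripper, t_base2gripper = [], []
    for R, t in gripper_to_base:
        R_inv = R.T
        R_base2gripper.append(R_inv)
        t_base2gripper.append(-R_inv @ t)

    R_target2cam = [R for R, _ in target_to_cam]
    t_target2cam = [t for _, t in target_to_cam]

    R_cam2base, t_cam2base = cv2.calibrateHandEye(
        R_base2gripper, t_base2gripper, R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,
    )
    return R_cam2base, t_cam2base

# The automatic part of the thesis (moving the arm so the target is seen by every
# camera during image acquisition) is not reproduced here.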

    3D segmentation and localization using visual cues in uncontrolled environments

    3D scene understanding is an important area in robotics, autonomous vehicles, and virtual reality. The goal of scene understanding is to recognise and localise all the objects around the agent, which is done through semantic segmentation and depth estimation. Current approaches focus on improving robustness on each task but fail to make them efficient enough for real-time use. This thesis presents four efficient methods for scene understanding that work in real environments and provide solutions for both 2D and 3D data. The first approach is a pipeline that combines the block-matching algorithm for disparity estimation, an encoder-decoder neural network for semantic segmentation, and a refinement step that uses both outputs to complete regions that were not labelled or had no disparity assigned to them. This method provides accurate results in 3D reconstruction and morphology estimation of complex structures such as rose bushes. Due to the lack of datasets of rose bushes and their segmentation, we also created three large datasets: two contain real roses that were manually labelled, and the third was created using a scene modeller and 3D rendering software, aiming to capture diversity and realism and to obtain different types of labelling. The second contribution is a strategy for real-time rose pruning using visual servoing of a robotic arm together with our previous approach. Existing methods obtain the structure of the plant and plan the cutting trajectory using only a global planner, assuming a constant background. Our method works in real environments and uses visual feedback to refine the location of the cutting targets and modify the planned trajectory. The proposed visual servoing allows the robot to reach the cutting points 94% of the time, compared to 50% when using only a global planner without visual feedback. To the best of our knowledge, this is the first robot able to prune a complete rose bush in a natural environment. Recent deep-learning networks for image segmentation and disparity estimation provide accurate results, but most are computationally expensive, which makes them impractical for real-time tasks. Our third contribution uses multi-task learning to learn image segmentation and disparity estimation together, end-to-end. The experiments show that our network has at most one third of the parameters of the state of the art for each individual task and still provides competitive results. The last contribution explores scene understanding using 3D data. Recent approaches use point-based networks for point cloud segmentation and find local relations between points using only the latent features provided by the network, omitting the geometric information of the point clouds. Our approach aggregates the geometric information into the network. Since the geometric and latent features are different, the network uses a two-headed attention mechanism to perform local aggregation at both the latent and the geometric level. This additional information helps the network obtain more accurate semantic segmentation on real point cloud data, using fewer parameters than current methods. Overall, the method obtains state-of-the-art segmentation on the real-world S3DIS dataset with 69.2% and competitive results on the ModelNet40 and ShapeNetPart datasets.
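
    As an illustration of the first contribution only, the sketch below combines block-matching disparity with a given semantic segmentation mask and a simple segmentation-guided completion of invalid disparities. The class-median fill rule and the function names are assumptions standing in for the thesis' refinement step; the segmentation mask is assumed to come from the encoder-decoder network.

# Hedged sketch (not the thesis code) of the first pipeline: block-matching disparity,
# a semantic segmentation mask produced elsewhere, and a simple refinement that
# fills missing disparity inside each segmented region.
import numpy as np
import cv2

def refine_disparity(disparity: np.ndarray, segmentation: np.ndarray) -> np.ndarray:
    """Fill invalid disparities (<= 0) with the median disparity of their semantic class."""
    refined = disparity.copy()
    for label in np.unique(segmentation):
        mask = segmentation == label
        valid = mask & (disparity > 0)
        if valid.any():
            refined[mask & (disparity <= 0)] = np.median(disparity[valid])
    return refined

def reconstruct(left_gray: np.ndarray, right_gray: np.ndarray, segmentation: np.ndarray) -> np.ndarray:
    """Block-matching disparity followed by segmentation-guided completion."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0  # fixed-point to pixels
    return refine_disparity(disparity, segmentation)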