
    MinkSORT: A 3D deep feature extractor using sparse convolutions to improve 3D multi-object tracking in greenhouse tomato plants

    The agro-food industry is turning to robots to address the challenge of labour shortage. However, agro-food environments pose difficulties for robots due to high variation and occlusions. In the presence of these challenges, accurate world models, with information about object location, shape, and properties, are crucial for robots to perform tasks accurately. Building such models is challenging due to the complex and unique nature of agro-food environments, and errors in the model can lead to task-execution issues. In this paper, we propose MinkSORT, a novel method for generating tracking features using a 3D sparse convolutional network in a deepSORT-like approach to improve the accuracy of world models in agro-food environments. We evaluated our feature extractor on real-world data collected in a tomato greenhouse, where it significantly improved the performance of our baseline model, which tracks tomato positions in 3D using a Kalman filter and Mahalanobis distance. Our deep learning feature extractor improved the HOTA from 42.8% to 44.77%, the association accuracy from 32.55% to 35.55%, and the MOTA from 57.63% to 58.81%. We also evaluated different contrastive loss functions for training the feature extractor and demonstrated that our approach improves performance on three separate precision and recall detection outcomes. Our method improves world-model accuracy, enabling robots to perform tasks such as harvesting and plant maintenance with greater efficiency and accuracy, which is essential for meeting the growing demand for food in a sustainable manner.
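The deepSORT-style association step described in the abstract, a Kalman-predicted position gated against detections by Mahalanobis distance, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the covariance, positions, and gating threshold are illustrative values.

```python
import numpy as np

def mahalanobis_gate(track_mean, track_cov, detections, threshold=7.8147):
    """Associate 3D detections to one track via squared Mahalanobis distance.

    track_mean: (3,) Kalman-predicted position of the track
    track_cov:  (3, 3) innovation covariance of the prediction
    detections: (N, 3) detected 3D centroids in the current frame
    threshold:  chi-square 95% gate for 3 degrees of freedom
    Returns the index of the closest gated detection, or None.
    """
    inv_cov = np.linalg.inv(track_cov)
    diffs = detections - track_mean                    # (N, 3)
    d2 = np.einsum("ni,ij,nj->n", diffs, inv_cov, diffs)
    best = int(np.argmin(d2))
    return best if d2[best] <= threshold else None

# Hypothetical example: two candidate detections near a predicted tomato
pred = np.array([0.10, 0.50, 1.20])
cov = np.eye(3) * 0.01                                 # ~10 cm std dev
dets = np.array([[0.12, 0.49, 1.21],
                 [0.60, 0.90, 1.70]])
print(mahalanobis_gate(pred, cov, dets))               # -> 0
```

A deep appearance feature, as MinkSORT adds, would combine this motion distance with a feature-similarity term before matching.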

    Development and evaluation of automated localization and reconstruction of all fruits on tomato plants in a greenhouse based on multi-view perception and 3D multi-object tracking

    Accurate representation and localization of relevant objects are important for robots to perform tasks. Building a generic representation that can be used across different environments and tasks is not easy, as the relevant objects vary depending on the environment and the task. A further challenge arises in agro-food environments due to their complexity and high levels of clutter and occlusion. In this paper, we present a method to build generic representations in highly occluded agro-food environments using multi-view perception and 3D multi-object tracking. Our representation is built upon a detection algorithm that generates a partial point cloud for each detected object. The detected objects are then passed to a 3D multi-object tracking algorithm that creates and updates the representation over time. The whole process runs at a rate of 10 Hz. We evaluated the accuracy of the representation in a real-world agro-food environment, where it successfully represented and located tomatoes on tomato plants despite a high level of occlusion. We were able to estimate the total count of tomatoes with a maximum error of 5.08% and to track tomatoes with a tracking accuracy of up to 71.47%. Additionally, we showed that an evaluation using tracking metrics gives more insight into the errors in localizing and representing the fruits.
    Comment: Pre-print; article submitted and in review process.
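The core loop of such a tracking-based representation, matching each frame's detections to existing tracks and spawning new tracks for unmatched ones, can be sketched with a simplified greedy nearest-neighbour stand-in for the paper's tracker; the distance gate and positions are hypothetical.

```python
import numpy as np

def update_tracks(tracks, detections, max_dist=0.05):
    """Greedy nearest-neighbour update of 3D tracks (a simplified stand-in
    for a full multi-object tracker). `tracks` and `detections` are lists
    of (3,) positions in metres; unmatched detections start new tracks."""
    unmatched = list(range(len(detections)))
    for i, t in enumerate(tracks):
        if not unmatched:
            break
        dists = [np.linalg.norm(detections[j] - t) for j in unmatched]
        k = int(np.argmin(dists))
        if dists[k] <= max_dist:
            tracks[i] = detections[unmatched.pop(k)]   # update matched track
    for j in unmatched:
        tracks.append(detections[j])                   # spawn new tracks
    return tracks

tracks = [np.array([0.0, 0.0, 1.0])]
dets = [np.array([0.01, 0.0, 1.0]), np.array([0.5, 0.2, 1.1])]
tracks = update_tracks(tracks, dets)
print(len(tracks))  # -> 2: one updated track, one new track
```

The fruit count reported in the abstract then falls out of the representation as the number of live tracks.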

    Improved generalization of a plant-detection model for precision weed control

    Lack of generalization in plant-detection models is one of the main challenges preventing the realization of autonomous weed-control systems. This paper investigates the effect of the train and test dataset distributions on the generalization error of a plant-detection model and uses incremental training to mitigate that error. We use the YOLOv3 object detector as the plant-detection model. To train the model and test its generalization properties, we used a broad dataset consisting of 25 sub-datasets sampled from multiple geographic areas, soil types and cultivation conditions, containing variation in weeds, background vegetation, camera quality and illumination. Using this dataset, we evaluated the generalization error of the plant-detection model, assessed the effect of sampling training images from multiple arable fields on its generalization, investigated the relation between the number of training images and generalization, and applied incremental training to mitigate the generalization error on new arable fields. The average generalization error of our plant-detection model was 0.06 mAP. Increasing the number of sub-datasets used for training, while keeping the total number of training images constant, increased the variation covered by the training set and improved generalization. Adding more training images sampled from the same datasets increased generalization further. However, this effect is limited and only holds when the new images cover new variation. Naively adding more images does not prepare the model for specific scenarios outside the training distribution. Using incremental training, the model can be adapted to such scenarios and the generalization error can be mitigated. Depending on the discrepancy between the training set and the new field, fine-tuning on as few as 25 images can already mitigate the generalization error.
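The generalization error measured above is simply the drop in detection quality when moving from fields represented in the training set to a held-out field. A minimal sketch, with hypothetical leave-one-field-out mAP scores (not the paper's data):

```python
def generalization_error(map_seen, map_unseen):
    """Generalization error of a detector: the drop in mAP when moving
    from fields seen during training to a held-out field."""
    return map_seen - map_unseen

# Hypothetical leave-one-field-out evaluation over three folds:
# (mAP on seen fields, mAP on the held-out field)
folds = [(0.82, 0.77), (0.80, 0.73), (0.79, 0.73)]
errors = [generalization_error(s, u) for s, u in folds]
print(round(sum(errors) / len(errors), 3))  # -> 0.06
```

Incremental training then amounts to fine-tuning the trained model on a small labelled sample from the new field and re-measuring this gap.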

    Angle estimation between plant parts for grasp optimisation in harvest robots

    For many robotic harvesting applications, the position of and angle between plant parts are required to optimally position the end-effector before attempting to approach, grasp and cut the product. A method for estimating the angle between plant parts, e.g. stem and fruit, is presented to support the optimisation of grasp pose for harvest robots. The hypothesis is that this angle in the horizontal plane can be accurately derived from colour images under unmodified greenhouse conditions. It was further hypothesised that the locations of a fruit and stem could be inferred in the image plane from sparse semantic segmentations. The paper focussed on 4 sub-tasks for a sweet-pepper harvesting robot. Each task was evaluated under 3 conditions: laboratory, simplified greenhouse and unmodified greenhouse. The requirements for each task were based on the end-effector design, which required a 25° positioning accuracy. In Task I, colour image segmentation for the classes background, fruit, and stem plus wire was performed, meeting the requirement of an intersection-over-union > 0.58. In Task II, the stem pose was estimated from the segmentations. In Task III, centres of the fruit and stem were estimated from the output of the previous tasks. Both centre estimations in Tasks II and III met the requirement of 25-pixel accuracy on average. In Task IV, the centres were used to estimate the angle between the fruit and stem, meeting the accuracy requirement of 25° in 73% of the cases. The work improved harvest performance, increasing the success rate from a theoretical 14% to 52% in practice under unmodified conditions.
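Task IV reduces to a planar geometry step: given the estimated stem and fruit centres, the horizontal angle follows from the arctangent of their offset. A minimal sketch, with hypothetical coordinates (the paper's own geometric model and units may differ):

```python
import math

def horizontal_angle(stem_xy, fruit_xy):
    """Angle (degrees) of the fruit relative to the stem in the
    horizontal plane, from their estimated centre coordinates."""
    dx = fruit_xy[0] - stem_xy[0]
    dy = fruit_xy[1] - stem_xy[1]
    return math.degrees(math.atan2(dy, dx))  # atan2 handles all quadrants

# Hypothetical centres in the horizontal plane
angle = horizontal_angle((0.0, 0.0), (0.10, 0.10))
print(round(angle, 1))  # -> 45.0
```

An estimate within 25° of the true angle would count as a success under the paper's end-effector requirement.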

    Erratum to ‘Angle estimation between plant parts for grasp optimisation in harvest robots’

    The publisher regrets the error in the previously published highlights. Highlights for this paper should be as follows:
    • A method for estimating the angle between the plant stem and fruit is presented.
    • Colour image segmentation of plant parts is performed by deep learning.
    • Segmentations are used to localize plant part centres.
    • A geometric model estimates angles between plant parts.
    • The angles support grasp pose optimisation in a sweet-pepper harvesting robot.
    The publisher would like to apologise for any inconvenience caused.

    Process-based greenhouse climate models : Genealogy, current status, and future directions

    CONTEXT: Process-based greenhouse climate models are valuable tools for the analysis and design of greenhouse systems. A growing number of greenhouse models have been published in recent years, making it difficult to identify which components are shared across models, which are new developments, and what the objectives, strengths and weaknesses of each model are. OBJECTIVE: We present an overview of the current state of greenhouse modelling by analyzing studies published between 2018 and 2020. This analysis helps identify the key processes considered in process-based greenhouse models, and the common approaches used to model them. Moreover, we outline how greenhouse models differ in terms of their objectives, complexity, accuracy, and transparency. METHODS: We describe a general structure of process-based greenhouse climate models, including a range of common approaches for describing the various model components. We analyze recently published models with respect to this structure, as well as their intended purposes, the greenhouse systems they represent, the equipment included, and the crops considered. We present a model inheritance chart, outlining the origins of contemporary models and showing which were built on previous works. We compare model validation studies and show the various types of datasets and metrics used for validation. RESULTS AND CONCLUSIONS: The analysis highlights the range of objectives and approaches prevalent in greenhouse modelling, and shows that despite the large variation in model design and complexity, considerable overlap exists. Some possible reasons for the abundance of models include a lack of transparency and code availability; a belief that model development is in itself a valuable research goal; a preference for simple models in control-oriented studies; and a difference in the time scales considered. Approaches to model validation vary considerably, making it difficult to compare models or assess whether they serve their intended purposes. We suggest that increased transparency and availability of source code will promote model reuse and extension, and that shared datasets and evaluation benchmarks will facilitate model evaluation and comparison. SIGNIFICANCE: This study highlights several issues that should be considered in greenhouse model selection and development. Developers of new models can use the decomposition provided in order to present their models and facilitate extension and reuse. Developers are encouraged to reflect on and explicitly state their model's range of suitability, complexity, validity, and transparency. Lastly, we highlight several steps that could be taken by the greenhouse modelling community in order to advance the field as a whole.
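The shared core of the models surveyed can be illustrated by a minimal lumped energy balance for greenhouse air temperature; the equation form, symbols, and parameter values below are illustrative only and are not taken from any specific model in the review.

```python
def step_air_temperature(T_air, T_out, Q_sun, Q_heat, U, A, C, dt):
    """One explicit Euler step of a minimal lumped energy balance:
        C * dT/dt = Q_sun + Q_heat - U * A * (T_air - T_out)
    T in degC, Q in W, U in W/(m2 K), A in m2, C in J/K, dt in s.
    All values are illustrative, not from any published model."""
    dT = (Q_sun + Q_heat - U * A * (T_air - T_out)) / C
    return T_air + dT * dt

# Simulate one hour at 60 s steps with constant illustrative forcing
T = 18.0  # initial air temperature [degC]
for _ in range(60):
    T = step_air_temperature(T, T_out=5.0, Q_sun=30_000.0, Q_heat=10_000.0,
                             U=6.0, A=1_000.0, C=3.0e6, dt=60.0)
print(round(T, 1))  # -> 11.7 (approaches the steady-state balance)
```

Published models differ mainly in how many such balances they track (air, cover, soil, crop) and how each flux term is resolved.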
