20 research outputs found

    Single-shot convolution neural networks for real-time fruit detection within the tree

    Image/video processing for fruit detection in the tree using hard-coded feature extraction algorithms has shown high accuracy in recent years. While accurate, these approaches, even with high-end hardware, are still computationally intensive and too slow for real-time systems. This paper details a deep convolutional neural network architecture based on single-stage detectors. Using deep-learning techniques eliminates the need to hand-code specific features for specific fruit shapes, colors and/or other attributes. The architecture takes the input image and divides it into an AxA grid, where A is a configurable hyper-parameter that defines the fineness of the grid. A detection and localization algorithm is applied to each grid cell, and each cell is responsible for predicting bounding boxes and a confidence score for fruit (apple and pear in this study) detected in that cell. The confidence score should be high if a fruit exists in a cell and zero otherwise. More than 100 images of apple and pear trees were taken, each containing approximately 50 fruits, resulting in more than 5000 apple and pear fruit instances. Labeling images for training consisted of manually specifying the bounding boxes for fruits, where (x, y) are the center coordinates of the box and (w, h) are its width and height. The architecture achieved a fruit-detection accuracy of more than 90%. Based on the correlation between the number of visible fruits, the number of fruits detected in one frame, and the actual number of fruits on a tree, a model was created to compensate for this error rate. Processing speed exceeds 20 FPS, which is fast enough for any grasping/harvesting robotic arm or other real-time application. HIGHLIGHTS: Using new convolutional deep learning techniques based on single-shot detectors to detect and count fruits (apple and pear) within the tree canopy
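    The grid encoding described above can be sketched in a few lines. The function below is an illustrative assumption of how a normalized box center (x, y) is assigned to a cell of the AxA grid and re-expressed relative to that cell's origin; it is not the paper's actual implementation.

```python
# Hypothetical sketch of the A x A grid target encoding.
# Assumes normalized image coordinates in [0, 1); names are illustrative.

def encode_box(x, y, w, h, A):
    """Map a box center (x, y) to its A x A grid cell and
    express the center relative to that cell's origin."""
    col = int(x * A)          # grid column containing the center
    row = int(y * A)          # grid row containing the center
    # offsets of the center within the cell, in [0, 1)
    cx = x * A - col
    cy = y * A - row
    return row, col, (cx, cy, w, h)

# Example: a fruit centered at (0.62, 0.31) on a 7x7 grid
row, col, target = encode_box(0.62, 0.31, 0.10, 0.12, 7)
```

    The cell at (row, col) is then the one responsible for predicting that fruit's box and confidence score.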

    Design, Development and Evaluation of an Intelligent Animal Repelling System for Crop Protection Based on Embedded Edge-AI

    In recent years, edge computing has become an essential technology for real-time application development by moving processing and storage capabilities close to end devices, thereby reducing latency, improving response time and ensuring secure data exchange. In this work, we focus on a Smart Agriculture application that aims to protect crops from ungulate attacks, and therefore to significantly reduce production losses, through the creation of virtual fences that take advantage of computer vision and ultrasound emission. Starting with an innovative device capable of generating ultrasound to drive away ungulates and thus protect crops from their attacks, this work provides a comprehensive description of the design, development and assessment of an intelligent animal repelling system that can detect and recognize ungulates and generate ultrasonic signals tailored to each ungulate species. Taking into account the constraints of the rural environment in terms of energy supply and network connectivity, the proposed system is based on IoT platforms that provide a satisfactory compromise between performance, cost and energy consumption. More specifically, we deployed and evaluated various edge computing devices (Raspberry Pi, with or without a neural compute stick, and NVIDIA Jetson Nano) running real-time object detectors (YOLO and Tiny-YOLO) with custom-trained models to identify the most suitable animal-recognition HW/SW platform to integrate with the ultrasound generator. Experimental results show the feasibility of the intelligent animal repelling system through the deployment of the animal detectors on power-efficient edge computing devices without compromising mean average precision and while satisfying real-time requirements. In addition, for each HW/SW platform, the experimental study provides a cost/performance analysis, as well as measurements of the average and peak CPU temperature. Best practices are also discussed; lastly, the article describes how the combined technologies can help farmers and agronomists in their decision-making and management processes
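    As a rough sketch of the thermal measurements mentioned above, the snippet below reads the CPU temperature through the standard Linux sysfs interface (available on Raspberry Pi and Jetson boards) and summarizes a run of samples. The sampling scheme and helper names are illustrative assumptions, not the paper's tooling.

```python
# Minimal sketch: CPU temperature logging on a Linux edge device.
# /sys/class/thermal is the standard Linux thermal sysfs interface.

def read_cpu_temp_c(path="/sys/class/thermal/thermal_zone0/temp"):
    """Read the CPU temperature in degrees Celsius (reported in millidegrees)."""
    with open(path) as f:
        return int(f.read().strip()) / 1000.0

def summarize(samples):
    """Average and peak temperature over a run of samples."""
    return sum(samples) / len(samples), max(samples)

# e.g. samples collected once per second during a detection run
avg, peak = summarize([48.2, 55.7, 61.3, 59.8])
```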

    Novel Proportionate Scrutiny On Crop Protection From Creatures By Deep Learning

    The main objective of this paper is to protect crops from animal attacks. Conventional techniques apply the same kind of security to all types of animals detected by a passive IR sensor, and only single-stage protection is applied. In this work, images were captured and the animals identified with the help of machine learning and deep learning techniques. The project was designed around a rectangular farm area. On each side of the entrance, a device was installed to capture images for processing and identify animals; based on the identification, different levels of security were applied, producing sounds at different dB levels and a variety of dazzling lights. This work provides a comprehensive description of the design, development, and assessment of an intelligent animal repelling system that detects and recognizes animals. The enhancement lies in applying different levels and types of protection based on the classified animal. At the initial protection level, noise and light from the opposite side drive the animal out of the farm. If the animal is still on the farm, the next stage is initiated and the captured image is sent to the owner. The accuracy of all the methods discussed is compared based on the complexity of the technique, implementation cost, response time, and accuracy of animal detection. In recent years, edge computing has become an essential technology for real-time application development by moving processing and storage capabilities close to end devices, thereby reducing latency, improving response time, and ensuring secure data exchange
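    The staged response described above can be sketched as a small decision routine. The stage names and actions below are illustrative assumptions, not the paper's exact design.

```python
# Illustrative sketch of the two-stage deterrent escalation.

def respond(animal_detected, still_present):
    """Return the ordered list of deterrent actions to take."""
    actions = []
    if animal_detected:
        # stage 1: sound and light from the side opposite the animal
        actions.append("sound_and_light")
        if still_present:
            # stage 2: send the captured image to the farm owner
            actions.append("alert_owner")
    return actions
```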

    Fruit Detection and Pose Estimation for Grape Cluster–Harvesting Robot Using Binocular Imagery Based on Deep Neural Networks

    Reliable and robust fruit-detection algorithms in nonstructural environments are essential for the efficient use of harvesting robots. The pose of fruits is crucial to guide robots to approach target fruits for collision-free picking. To achieve accurate picking, this study investigates an approach to detect fruit and estimate its pose. First, the state-of-the-art mask region convolutional neural network (Mask R-CNN) is deployed to segment binocular images and output a mask image of the target fruit. Next, the grape point cloud extracted from the images is filtered and denoised to obtain an accurate point cloud. Finally, the accurate point cloud is fitted with a cylinder model using the RANSAC algorithm, and the axis of the cylinder is used to estimate the pose of the grape cluster. A dataset was acquired in a vineyard to evaluate the performance of the proposed approach in a nonstructural environment. The fruit-detection results on 210 test images show that the average precision, recall, and intersection over union (IoU) are 89.53%, 95.33%, and 82.00%, respectively. Detection and point cloud segmentation for each grape cluster took approximately 1.7 s. The demonstrated performance indicates that the method can be applied to grape-harvesting robots
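    As a simplified stand-in for the RANSAC cylinder fit described above, the principal axis of the segmented point cloud already gives a rough estimate of the cluster axis. The sketch below uses plain PCA (via SVD) on a synthetic elongated cloud; it is a simplification, not the paper's method.

```python
# Simplified axis estimation for an elongated point cloud (PCA via SVD).
import numpy as np

def principal_axis(points):
    """Unit vector along the dominant direction of an (N, 3) cloud."""
    centered = points - points.mean(axis=0)
    # right-singular vector with the largest singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0] / np.linalg.norm(vt[0])

# synthetic grape-like cloud: elongated along z with small lateral noise
rng = np.random.default_rng(0)
pts = np.column_stack([rng.normal(0, 0.01, 200),
                       rng.normal(0, 0.01, 200),
                       np.linspace(0.0, 0.3, 200)])
axis = principal_axis(pts)
```

    A RANSAC cylinder fit additionally rejects outliers and recovers the radius, which matters for noisy field data.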

    Cotton boll localization method based on point annotation and multi-scale fusion

    Cotton is an important source of fiber. The precise and intelligent management of cotton fields is a top priority of cotton production. Many intelligent management methods for cotton fields depend on cotton boll localization, such as automated cotton picking, sustainable boll pest control, boll maturity analysis, and yield estimation. At present, object detection methods are widely used for crop localization. However, object detection methods require relatively expensive bounding box annotations for supervised learning, and some non-object regions are inevitably included in the annotated bounding boxes. The features of these non-object regions may cause misjudgment by the network model. Unlike bounding box annotations, point annotations are less expensive to label, and the annotated points are only likely to belong to the object. Considering these advantages of point annotation, a point annotation-based multi-scale cotton boll localization method, called MCBLNet, is proposed. It is mainly composed of scene encoding for feature extraction, location decoding for localization prediction, and localization map fusion for multi-scale information association. To evaluate the robustness and accuracy of MCBLNet, we conduct experiments on our constructed cotton boll localization (CBL) dataset (300 in-field cotton boll images). Experimental results demonstrate that MCBLNet improves average precision by 49.4% on the CBL dataset compared with typical state-of-the-art point-based localization methods. Additionally, MCBLNet outperforms, or is at least comparable with, common object detection methods
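    Point annotations are commonly converted into a dense target map for training a localization network. The sketch below renders annotated points as a sum of Gaussians, a typical encoding for point-supervised localization; the exact target used by MCBLNet may differ.

```python
# Sketch: render point annotations as a Gaussian localization map.
import numpy as np

def point_map(points, shape, sigma=2.0):
    """Render annotated (row, col) points as a sum of 2D Gaussians."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    out = np.zeros(shape, dtype=float)
    for r, c in points:
        out += np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2 * sigma ** 2))
    return out

# two annotated boll centers on a 32x32 patch
m = point_map([(8, 8), (20, 25)], (32, 32))
```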

    Sustainable Agriculture and Advances of Remote Sensing (Volume 2)

    Agriculture, as the main source of alimentation and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase our global food system production, to reduce biodiversity loss and to preserve our natural ecosystem, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agricultural practices. Earth observation data, in situ and proxy-remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publication of the results, among others

    Sustainable Fruit Growing

    Fruit production has faced many challenges in recent years as society seeks to increase fruit consumption while improving safety and reducing the harmful effects of intensive farming practices (e.g., pesticides and fertilizers). In the last 50 years, the population has more than doubled and is expected to grow to 9 billion people by 2050. Per capita consumption of fruit has also increased over this time, and the global fruit industry faces a major challenge to produce enough fruit in both quantity and quality. The need for sustainable production of nutritious food is critical for human and environmental health. This book provides some answers for people who are increasingly concerned about the sustainability of fruit production and the fruit industry as a whole

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen, as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE
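    The occupancy-grid fusion step can be sketched as a simple hit-count accumulation of detections into a 2D grid. The cell size and coordinate convention below are illustrative assumptions, not the thesis's implementation.

```python
# Sketch: accumulate obstacle detections into a 2D occupancy grid.
import numpy as np

def update_grid(grid, detections, cell_size=0.5):
    """Increment the hit count of each cell containing a detection
    (x, y in metres; grid origin at index [0, 0])."""
    for x, y in detections:
        i = int(y // cell_size)
        j = int(x // cell_size)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] += 1
    return grid

grid = np.zeros((40, 40), dtype=int)
# three lidar detections, two falling in the same cell
update_grid(grid, [(3.2, 5.1), (3.3, 5.4), (10.0, 1.0)])
```

    Cells whose hit count exceeds a threshold over repeated scans can then be flagged as obstacles along the planned path.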