228 research outputs found

    Automated crop plant counting from very high-resolution aerial imagery

    Knowing before harvesting how many plants have emerged and how they are growing is key to optimizing labour and the efficient use of resources. Unmanned aerial vehicles (UAV) are a useful tool for fast and cost-efficient data acquisition. However, imagery needs to be converted into operational spatial products that crop producers can use to gain insight into the spatial distribution of the number of plants in the field. In this research, an automated method for counting plants from very high-resolution UAV imagery is addressed. The proposed method uses machine vision (the Excess Green Index and Otsu's method) and transfer learning with convolutional neural networks to identify and count plants. The integrated methods were implemented to count 10-week-old spinach plants in an experimental field with a surface area of 3.2 ha. Validation data of plant counts were available for 1/8 of the surface area. The results showed that the proposed methodology can count plants with an accuracy of 95% for a spatial resolution of 8 mm/pixel in an area up to 172 m2. Moreover, when the spatial resolution decreases by 50%, the maximum additional counting error is 0.7%. Finally, a total of 170 000 plants in an area of 3.5 ha was computed with an error of 42.5%. The study shows that it is feasible to count individual plants using UAV-based off-the-shelf products and that machine vision/learning algorithms make it possible to translate image data into practical information for non-experts.
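    The abstract above combines a machine-vision stage (Excess Green Index plus Otsu thresholding) with a CNN. As a rough illustration of that first stage only, here is a minimal Python sketch using OpenCV; the file path, the minimum blob size, and the use of connected components as the counting step are assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of ExG + Otsu vegetation segmentation followed by a
# connected-component count. Paths and thresholds are illustrative only.
import cv2
import numpy as np

def count_plants(image_path, min_area_px=50):
    bgr = cv2.imread(image_path).astype(np.float32) / 255.0
    b, g, r = cv2.split(bgr)
    exg = 2.0 * g - r - b                                   # Excess Green Index
    exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu's method picks a global threshold separating vegetation from soil
    _, mask = cv2.threshold(exg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Treat each sufficiently large connected blob as one plant candidate
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    return sum(1 for i in range(1, n_labels)
               if stats[i, cv2.CC_STAT_AREA] >= min_area_px)

print(count_plants("uav_tile.png"))   # hypothetical UAV image tile
```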

    An Agave Counting Methodology Based on Mathematical Morphology and Images Acquired through Unmanned Aerial Vehicles

    Blue agave is an important commercial crop in Mexico and the main source of the traditional Mexican beverage known as tequila. The blue agave variety known as Tequilana Weber is a crucial element for the tequila agribusiness and the agricultural economy in Mexico. The number of agave plants in the field is one of the main parameters for estimating tequila production. In this manuscript, we describe a mathematical morphology-based algorithm that addresses the automatic agave counting task. The proposed methodology was applied to a set of real images collected using an Unmanned Aerial Vehicle equipped with a digital Red-Green-Blue (RGB) camera. The number of plants automatically identified in the collected images was compared to the number of plants counted by hand. Accuracy of the proposed algorithm depended on the size heterogeneity of plants in the field and on illumination; it ranged from 0.8309 to 0.9806, and the performance of the proposed algorithm was satisfactory. This research was supported by the Spanish Ministerio de Economía y Competitividad, contract TIN2015-64395-R (MINECO/FEDER, UE), as well as by the Basque Government, contract IT900-16. This work was also supported in part by CONACYT (Mexico), grant 258033.
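    The abstract does not spell out the exact morphological operators used; the sketch below shows one generic way mathematical morphology can drive such a count (opening with a disk-shaped structuring element sized to the expected plant radius, then counting connected components). The binary vegetation mask and the radius value are assumptions.

```python
# Generic morphology-based counting sketch; operators and sizes are
# illustrative, not the paper's exact algorithm.
import cv2

def count_agave(vegetation_mask, plant_radius_px=15):
    """vegetation_mask: uint8 binary image (0/255) derived from an RGB UAV photo."""
    k = 2 * plant_radius_px + 1
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
    # Opening removes objects smaller than the structuring element and
    # weakens thin bridges between touching plants
    opened = cv2.morphologyEx(vegetation_mask, cv2.MORPH_OPEN, se)
    n_labels, _ = cv2.connectedComponents(opened, connectivity=8)
    return n_labels - 1   # subtract the background label
```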

    Coffee Tree Detection Using Convolutional Neural Network

    Identifying plants is an important task because of the role plants play in sustaining human life. Identifying a plant using traditional methods, such as inspecting its physical properties, is burdensome, so several computational methods have been introduced for detecting trees. In this study we constructed a coffee tree dataset, since no publicly available dataset exists for the detection and classification of coffee trees in orchard environments, even though the crop plays a role in the health, industrial and agricultural sectors and in driving economic development. Many machine learning algorithms have been used to detect and classify trees, with reliable results. In this study, we present a deep learning-based approach, in particular a convolutional neural network, for coffee tree detection and classification. The study focused on providing a dataset for the detection and classification of coffee trees and on improving the efficiency of the algorithm used in the detection and classification model. The proposed system achieved the best results, with an accuracy of 0.97.
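    The abstract does not publish architecture details; the following PyTorch snippet is only a small illustrative patch classifier of the kind it describes. The input size, layer widths and two-class setup (coffee tree vs. background) are assumptions.

```python
# Illustrative CNN patch classifier; architecture and classes are assumed.
import torch
import torch.nn as nn

class CoffeeTreeCNN(nn.Module):
    def __init__(self, num_classes=2):          # coffee tree vs. background
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                        # x: (N, 3, 128, 128) patches
        return self.classifier(self.features(x))

logits = CoffeeTreeCNN()(torch.randn(4, 3, 128, 128))   # smoke test
```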

    Boosting precision crop protection towards agriculture 5.0 via machine learning and emerging technologies: A contextual review

    Crop protection is a key activity for the sustainability and feasibility of agriculture in a current context of climate change, which is causing the destabilization of agricultural practices and an increase in the incidence of current or invasive pests, and a growing world population that requires guaranteeing the food supply chain and ensuring food security. In view of these events, this article provides a contextual review in six sections on the role of artificial intelligence (AI), machine learning (ML) and other emerging technologies in solving current and future challenges of crop protection. Over time, crop protection has progressed from a primitive agriculture 1.0 (Ag1.0) through various technological developments to reach a level of maturity closely in line with Ag5.0 (section 1), which is characterized by successfully leveraging ML capacity and modern agricultural devices and machines that perceive, analyze and actuate following the main stages of precision crop protection (section 2). Section 3 presents a taxonomy of ML algorithms that support the development and implementation of precision crop protection, while section 4 analyses the scientific impact of ML on the basis of an extensive bibliometric study of >120 algorithms, outlining the most widely used ML and deep learning (DL) techniques currently applied in relevant case studies on the detection and control of crop diseases, weeds and pests. Section 5 describes 39 emerging technologies in the fields of smart sensors and other advanced hardware devices, telecommunications, proximal and remote sensing, and AI-based robotics that will foreseeably lead the next generation of perception-based, decision-making and actuation systems for digitized, smart and real-time crop protection in a realistic Ag5.0. Finally, section 6 highlights the main conclusions and final remarks.

    Automatic detection of Acacia longifolia invasive species based on UAV-acquired aerial imagery

    The Acacia longifolia species is known for its rapid growth and dissemination, causing loss of biodiversity in the affected areas. In order to avoid the uncontrolled spread of this species, it is important to effectively monitor its distribution in agroforestry regions. For this purpose, this paper proposes the use of Convolutional Neural Networks (CNN) for the detection of Acacia longifolia from images acquired by an unmanned aerial vehicle. Two models based on the same CNN architecture were elaborated. One classifies image patches into one of nine possible classes, which are later converted into a binary model; this model presented an accuracy of and in the validation and training sets, respectively. The second model was trained directly for binary classification and showed an accuracy of and for the validation and test sets, respectively. The results show that the use of multiple classes, useful to provide the aerial vehicle with richer semantic information regarding the environment, does not hamper the accuracy of Acacia longifolia detection in the classifier's primary task. The presented system also includes a method for increasing classification accuracy by consulting an expert to review the model's predictions on an automatically selected subset of the samples.
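    The abstract's first model folds nine patch classes into a binary Acacia / non-Acacia decision. One plausible way to do that collapse is sketched below; the class list and the position of the Acacia class are purely hypothetical, not taken from the paper.

```python
# Hypothetical 9-class -> binary collapse; class names are invented.
import numpy as np

CLASSES = ["acacia", "other_tree", "shrub", "grass", "bare_soil",
           "road", "water", "building", "shadow"]          # 9 assumed classes
ACACIA_IDX = CLASSES.index("acacia")

def to_binary(probs: np.ndarray) -> np.ndarray:
    """probs: (N, 9) softmax outputs -> (N,) boolean Acacia mask."""
    return probs.argmax(axis=1) == ACACIA_IDX
```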

    Early corn stand count of different cropping systems using UAV-imagery and deep learning

    Optimum plant stand density and uniformity are vital to maximizing corn (Zea mays L.) yield potential. Assessment of stand density can occur shortly after seedlings begin to emerge, allowing for timely replant decisions. Conventional methods for evaluating an early plant stand rely on manual measurement and visual observation, which are time consuming, subjective because of the small sampling areas used, and unable to capture field-scale spatial variability. This study aimed to evaluate the feasibility of an unmanned aerial vehicle (UAV)-based imaging system for estimating early corn stand count in three cropping systems (CS) with different tillage and crop rotation practices. A UAV equipped with an on-board RGB camera was used to collect imagery of corn seedlings (~14 days after planting) in the three CS, i.e., minimum-till corn-soybean rotation (MTCS), no-till corn-soybean rotation (NTCS), and no-till corn-corn rotation with cover crop implementation (NTCC). An image processing workflow based on a deep learning (DL) model, U-Net, was developed for plant segmentation and stand count estimation. Results showed that the DL model performed best in segmenting seedlings in MTCS, followed by NTCS and NTCC. Similarly, accuracy for stand count estimation was highest in MTCS (R2 = 0.95), followed by NTCS (0.94) and NTCC (0.92). Differences among CS were related to the amount and distribution of soil surface residue cover, with increasing residue generally reducing the performance of the proposed method in stand count estimation. Thus, using UAV imagery and DL modeling to estimate early corn stand count is feasible, though its performance is influenced by soil and crop management practices.
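    The abstract describes a U-Net segmentation followed by stand count estimation and an R2 comparison against manual counts. A minimal sketch of that post-processing step, assuming a per-plot probability map and illustrative thresholds, could look like the following (it is not the authors' exact workflow).

```python
# Sketch: U-Net probability map -> seedling count -> R^2 against manual counts.
# Threshold and minimum blob size are illustrative assumptions.
import numpy as np
from scipy import ndimage
from sklearn.metrics import r2_score

def stand_count(prob_map, threshold=0.5, min_pixels=20):
    """prob_map: (H, W) U-Net output in [0, 1] for one plot image."""
    mask = prob_map > threshold
    labels, n = ndimage.label(mask)                        # connected blobs
    sizes = np.asarray(ndimage.sum(mask, labels, range(1, n + 1)))
    return int(np.sum(sizes >= min_pixels))                # drop tiny blobs

def evaluate(estimated_counts, manual_counts):
    return r2_score(manual_counts, estimated_counts)       # as in the R2 comparison
```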

    Automatic Identification and Monitoring of Plant Diseases Using Unmanned Aerial Vehicles: A Review

    Disease diagnosis is one of the major tasks for increasing food production in agriculture. Although precision agriculture (PA) takes less time and provides a more precise application of agricultural activities, detecting disease using an Unmanned Aerial System (UAS) is a challenging task. Several Unmanned Aerial Vehicles (UAVs) and sensors have been used for this purpose. UAV platforms and their peripherals have their own limitations in accurately diagnosing plant diseases. Several types of image processing software are available for vignetting correction and orthorectification. The training and validation of datasets are important characteristics of data analysis. Currently, different algorithms and architectures of machine learning models are used to classify and detect plant diseases. These models help in image segmentation and feature extraction to interpret results. Researchers also use the values of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), the Crop Water Stress Index (CWSI), etc., acquired from different multispectral and hyperspectral sensors, fitting them into statistical models to deliver results. There are still various shortcomings in the automatic detection of plant diseases, as imaging sensors are limited by their spectral bandwidth, resolution, background noise of the image, etc. The future of crop health monitoring using UAVs should include a gimbal carrying multiple sensors, large datasets for training and validation, the development of site-specific irradiance systems, and so on. This review briefly highlights the advantages of automatic detection of plant diseases to growers.
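    Since the review leans on vegetation indices such as NDVI, a short reminder of how NDVI is computed from co-registered bands may be useful; the band array names and the epsilon guard are the only assumptions here.

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-6, None)    # guard against divide-by-zero
```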

    Olive Tree Biovolume from UAV Multi-Resolution Image Segmentation with Mask R-CNN

    Olive tree growing is an important economic activity in many countries, mostly in the Mediterranean Basin, Argentina, Chile, Australia, and California. Although recent intensification techniques organize olive groves in hedgerows, most olive groves are rainfed and the trees are scattered (as in Spain and Italy, which account for 50% of the world's olive oil production). Accurate measurement of tree biovolume is a first step towards monitoring tree performance in olive production and health. In this work, we use one of the most accurate deep learning instance segmentation methods (Mask R-CNN) and unmanned aerial vehicle (UAV) images for olive tree crown and shadow segmentation (OTCS) to further estimate the biovolume of individual trees. We evaluated our approach on images with different spectral bands (red, green, blue, and near infrared) and vegetation indices (the normalized difference vegetation index, NDVI, and the green normalized difference vegetation index, GNDVI). The performance of red-green-blue (RGB) images was assessed at two spatial resolutions, 3 cm/pixel and 13 cm/pixel, while NDVI and GNDVI images were assessed only at 13 cm/pixel. All trained Mask R-CNN-based models showed high performance in tree crown segmentation, particularly when using the fusion of all datasets in GNDVI and NDVI (F1-measure from 95% to 98%). A comparison of our estimated biovolume with ground truth measurements in a subset of trees showed an average accuracy of 82%. Our results support the use of the NDVI and GNDVI spectral indices for accurate estimation of the biovolume of scattered trees, such as olive trees, in UAV images. This research was supported by the Russian Foundation for Basic Research (RFBR), grants 19-01-00215 and 20-07-00370; the European Research Council (ERC)/European Commission, grant 647038; the Spanish Government, grant RYC-2015-18136; the Consejería de Economía, Conocimiento y Universidad de la Junta de Andalucía, grants P18-RT-1927 and DETECTOR A-RNM-256-UGR18; and the European Research and Development Funds (ERDF) program.
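    As a structural illustration only, the snippet below runs torchvision's off-the-shelf Mask R-CNN on a UAV tile and turns each predicted mask into a crown area and then into a crude biovolume proxy. The COCO-pretrained weights (which would need fine-tuning on olive imagery to detect trees at all), the score threshold, the ground sampling distance, and the assumed mean height are placeholders; the paper's own model estimates biovolume from crown and shadow segmentation, not from a fixed height.

```python
# Structural sketch only: pretrained Mask R-CNN inference + crude biovolume
# proxy. All numeric constants are placeholder assumptions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("olive_tile.png").convert("RGB"))  # hypothetical tile
with torch.no_grad():
    pred = model([image])[0]          # dict with boxes, labels, scores, masks

gsd_m = 0.03                          # assumed ground sampling distance, 3 cm/pixel
mean_height_m = 2.5                   # assumed mean crown height, not measured
for mask, score in zip(pred["masks"], pred["scores"]):
    if score < 0.7:
        continue
    crown_area_m2 = (mask[0] > 0.5).sum().item() * gsd_m ** 2
    biovolume_m3 = crown_area_m2 * mean_height_m          # crude proxy
    print(f"crown ~{crown_area_m2:.1f} m^2, biovolume ~{biovolume_m3:.1f} m^3")
```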

    Sustainable Agriculture and Advances of Remote Sensing (Volume 2)

    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, reduce biodiversity loss and preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agricultural practices. Earth observation data and in situ and proxy-remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publication of the results, among others.