
    Hyperspectral Imaging from Ground Based Mobile Platforms and Applications in Precision Agriculture

    This thesis focuses on the use of line-scanning hyperspectral sensors on mobile, ground-based platforms and their application to agriculture. The work first addresses the geometric and radiometric calibration and correction of the acquired hyperspectral data. When operating at low altitudes, changing lighting conditions are common and unavoidable, complicating the retrieval of a surface's reflectance, which is solely a function of its physical structure and chemical composition. The thesis therefore contributes an evaluation of an approach that compensates for changes in illumination and obtains reflectance with less labour than traditional empirical methods; convenient field protocols are produced that require only a representative set of illumination and reflectance spectral samples. In addition, a method is developed for determining a line-scanning camera's rigid six-degree-of-freedom (6-DOF) offset, and its uncertainty, with respect to a navigation system, enabling accurate georegistration and sensor fusion. The thesis then applies the data captured from the platform to two agricultural applications. The first is a self-supervised weed detection framework that trains a per-pixel classifier on hyperspectral data without manual labelling; experiments show the framework rivals classifiers trained on hand-labelled data. The second demonstrates orchard-wide mapping of mango maturity from hyperspectral data using efficient image scanning techniques, a world-first result. A novel classification, regression and mapping pipeline is proposed to generate per-tree mango maturity averages. The results confirm that maturity prediction in mango orchards is possible in natural daylight using a hyperspectral camera, despite the complex illumination microclimates under the canopy.
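    As a rough illustration of the illumination-compensation step described above, the sketch below converts at-sensor radiance to reflectance using dark-current and white-reference spectra, in the style of a flat-field/empirical-line correction. The function, the toy data and the panel reflectance value are illustrative assumptions, not the thesis's actual protocol.

        import numpy as np

        def radiance_to_reflectance(radiance, dark_ref, white_ref, panel_reflectance=0.99):
            """Flat-field style correction for a line-scan hyperspectral cube (sketch).

            radiance:  (lines, pixels, bands) at-sensor radiance
            dark_ref:  (bands,) mean dark-current spectrum
            white_ref: (bands,) mean spectrum of a reference panel imaged under
                       the same illumination as the scene
            """
            denom = np.clip(white_ref - dark_ref, 1e-6, None)  # avoid divide-by-zero
            reflectance = panel_reflectance * (radiance - dark_ref) / denom
            return np.clip(reflectance, 0.0, 1.0)

        # Toy 3-band cube standing in for real line-scan data
        cube = np.random.rand(100, 640, 3) * 500.0
        dark = np.array([5.0, 5.0, 5.0])
        white = np.array([480.0, 500.0, 470.0])
        refl = radiance_to_reflectance(cube, dark, white)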

    Active Vision and Surface Reconstruction for 3D Plant Shoot Modelling

    Plant phenotyping is the quantitative description of a plant's physiological, biochemical and anatomical status, which can be used in trait selection and helps link underlying genetics with yield. Here, an active vision-based pipeline is presented that aims to reduce the bottleneck associated with phenotyping architectural traits. The pipeline fully automates photometric data acquisition and the recovery of three-dimensional (3D) models of plants without requiring botanical expertise, while remaining non-intrusive and non-destructive. Access to complete and accurate 3D models of plants supports computation of a wide variety of structural measurements. An Active Vision Cell (AVC), consisting of a camera-mounted robot arm, a combined software interface and a novel surface reconstruction algorithm, is proposed. This pipeline provides a robust, flexible and accurate method for automating the 3D reconstruction of plants. The reconstruction algorithm reduces noise and provides a promising, extendable framework for high-throughput phenotyping, improving on current state-of-the-art methods. Furthermore, because the active vision framework is combined with automatic selection of the key surface-reconstruction parameters, the pipeline can be applied to any plant species or form.
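    The thesis's own reconstruction algorithm is not reproduced here, but a minimal sketch of one common route from a captured point cloud to a surface mesh, using Open3D's statistical outlier removal and Poisson reconstruction, gives the flavour of the noise-reduction-then-surface step; the input path and parameter values are assumptions.

        import open3d as o3d

        # Point cloud captured by a camera-on-arm rig (hypothetical path)
        pcd = o3d.io.read_point_cloud("plant_scan.ply")

        # Noise reduction: discard points far from their local neighbourhood
        pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

        # Poisson reconstruction needs oriented normals
        pcd.estimate_normals(
            search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

        # Fit a surface; 'depth' trades fine detail against smoothing
        mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=9)
        o3d.io.write_triangle_mesh("plant_mesh.ply", mesh)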

    High-throughput phenotyping technology for corn ears

    The phenotype of any organism, in this case a plant, comprises traits or characteristics that can be measured by a technical procedure. Phenotyping is an important activity in plant breeding, since it gives breeders an observable representation of the plant's genetic code, the genotype. The word phenotype originates from the Greek "phainein" ("to show") and "typos" ("type"). Ideally, phenotyping technologies would develop in lockstep with genotyping technologies, but they have not: there is currently a major discrepancy between the technological sophistication of genotyping and phenotyping, and the gap is widening. Whereas genotyping has become a high-throughput, low-cost, standardized procedure, phenotyping still involves extensive manual measurements that are time consuming, tedious and error prone. The project conducted here aims to alleviate this problem. To aid breeders, a method was devised for high-throughput phenotyping of corn ears, based on an existing imaging arrangement that produces frontal views of the ears. This thesis describes the development of machine vision algorithms that measure overall ear parameters such as ear length, ear diameter and cap percentage (the proportion of the ear that features kernels versus the barren area). The main image processing functions used were segmentation, skew correction, morphological operations and image registration. To obtain a kernel count, an "ear map" was constructed using both a morphological operation and a feature-matching operation. The main challenge for the morphological operation was to accurately select only the kernel rows frontally exposed in each single image; this was addressed by developing a shadow-recognition algorithm. The main challenge for the feature-matching operation was to detect and match image feature points; this was addressed with Harris corner detection and SIFT descriptors, as sketched below. Once the ear map is created, many other morphological kernel parameters (area, location, circumference, to name a few) can be determined. Remaining challenges in this research are pointed out, including sample choice, apparatus modification and algorithm improvement, and suggestions and recommendations for future work are provided.
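    The abstract names Harris corner detection and SIFT descriptors for the feature-matching half of the ear map. A minimal OpenCV sketch of that step is below; the image paths, thresholds and ratio-test constant are illustrative.

        import cv2
        import numpy as np

        # Two overlapping frontal views of a corn ear (hypothetical paths)
        img1 = cv2.imread("ear_view_1.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("ear_view_2.png", cv2.IMREAD_GRAYSCALE)

        # Harris response highlights kernel corners
        harris = cv2.cornerHarris(np.float32(img1), blockSize=2, ksize=3, k=0.04)
        corners = harris > 0.01 * harris.max()  # boolean corner mask

        # SIFT keypoints and descriptors for matching across views
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Lowe's ratio test keeps only unambiguous matches for registration
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(f"{len(good)} reliable matches between the two views")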

    Above and below ground phenotyping methods for corn

    Over the past few decades, both researchers and commercial farmers have sought ways to accurately predict crop yields during the growing season. Modern science has succeeded in mapping the genes of crops to aid drought protection, germination timing and frost protection. These biological traits have assured crop seed strength and protection, enabling higher yields. However, many other physical traits, such as plant height, leaflet count, colour, germination rate and biomass, are indicators of a plant's overall health and future yield. This research provided a means by which an operator can measure an individual trait of a corn plant and predict yield. The first study used an imaging system comprising a CCD camera and a slanted laser sheet to measure the diameter of the corn stalk. After processing the data, the diameters of the corn stalks were shown to be positively correlated with the grain weight of the corn ear. The system therefore allows the farmer to predict per-plant yield (PPY) from measurements of stalk diameter. In addition, the measurement system is easy to use and is not affected by environmental conditions such as ambient light. The accuracy of the optical system was determined by comparing its measurements against those taken with digital calipers, using PVC tubes of 26.7 and 33.5 mm diameter. The optical system measured the diameter of the PVC tubes with an accuracy of 98% while the tubes' location within the measurement zone was varied. The same optical system was used in the greenhouse to measure the diameters of corn stalks, where the accuracy decreased to 84% due to measurement-location variability introduced by the stalks. The theme of phenotyping was extended in the second part of the research. While the corn stalk data can be used to predict PPY, the below-ground root system is indicative of the plant's stability and of its ability to reach nutrients and water. A "Rhizotron" was constructed, consisting of a soil volume visible behind a vertical glass pane. When a single corn plant is grown in the Rhizotron, its roots are pushed against the glass pane, which allows root development to be studied over time. A key parameter is the root angle, which has been shown to be highly influential in explaining the historic yield increases in the Midwest over the past decades (Hammer et al., 2002). Since the roots grow behind a glass pane, the challenge was to obtain high-resolution images of the root systems without reflections on the glass. A camera was mounted on a linear actuator, allowing overlapping images to be taken and combined into a complete mosaic of the root system.
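    The stalk-diameter measurement reduces to finding the laser stripe's horizontal extent in the image and scaling by a calibration factor. A toy version follows; the intensity threshold and the mm-per-pixel factor are assumed values that would come from calibrating against a target of known size.

        import cv2
        import numpy as np

        MM_PER_PIXEL = 0.21      # assumed scale, from imaging a known-size target
        LASER_THRESHOLD = 200    # assumed intensity cutoff for the laser stripe

        def stalk_diameter_mm(image_path):
            """Estimate stalk diameter from a slanted-laser-sheet image (sketch)."""
            img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            stripe = img > LASER_THRESHOLD            # pixels lit by the laser
            cols = np.where(stripe.any(axis=0))[0]    # columns containing stripe
            if cols.size == 0:
                return None                           # no stripe found
            width_px = cols[-1] - cols[0] + 1         # horizontal extent
            return width_px * MM_PER_PIXEL

        # e.g. stalk_diameter_mm("stalk_042.png") -> estimated diameter in mm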

    Remote sensing image fusion on 3D scenarios: A review of applications for agriculture and forestry

    Three-dimensional (3D) image mapping of real-world scenarios has great potential to provide the user with a more accurate understanding of the scene. Among other things, this enables unsupervised, automatic sampling of meaningful material classes from the target area for adaptive semi-supervised deep learning techniques. Recent, fast-developing research in computational fields is already taking this path; however, some issues remain around the computationally expensive processes involved in integrating multi-source sensing data. Recent studies on Earth observation and characterization are boosted by the proliferation of Unmanned Aerial Vehicles (UAVs) and sensors able to capture massive datasets at high spatial resolution. In this scope, many approaches have been presented for 3D modeling, remote sensing, image processing and mapping, and multi-source data fusion. This survey summarizes previous work on the reconstruction and analysis of 3D models of real scenarios using multispectral, thermal and hyperspectral imagery, organized by the most relevant contributions. The surveyed applications focus on agriculture and forestry, since these fields concentrate most applications and are widely studied. Many challenges are currently being overcome by recent methods based on the reconstruction of multi-sensorial 3D scenarios. In parallel, the processing of large image datasets has recently been accelerated by General-Purpose Graphics Processing Unit (GPGPU) approaches, which are also summarized in this work. Finally, some open issues and future research directions are presented.

    Funding: European Commission; Junta de Andalucía; Instituto de Estudios Giennenses; Spanish Government; Portuguese Foundation for Science and Technology; DATI (Digital Agriculture Technologies); grants 1381202-GEU, PYC20-RE-005-UJA, IEG-2021, UIDB/04033/2020 and FPU19/0010.
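    As a tiny example of the GPGPU-style acceleration the review summarises, the same band arithmetic (here, NDVI from aligned near-infrared and red bands) runs unchanged on CPU and GPU arrays via CuPy. CuPy availability, the array sizes and the random stand-in data are assumptions.

        import numpy as np
        import cupy as cp  # assumes a CUDA-capable GPU with CuPy installed

        def ndvi(nir, red):
            # Identical operator semantics for NumPy (CPU) and CuPy (GPU) arrays
            return (nir - red) / (nir + red + 1e-8)

        nir = np.random.rand(8192, 8192).astype(np.float32)  # stand-in NIR band
        red = np.random.rand(8192, 8192).astype(np.float32)  # stand-in red band

        ndvi_cpu = ndvi(nir, red)                                      # CPU baseline
        ndvi_gpu = cp.asnumpy(ndvi(cp.asarray(nir), cp.asarray(red)))  # GPU round trip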

    WeedMap: A large-scale semantic weed mapping framework using aerial multispectral imaging and deep neural network for precision farming

    We present a novel weed segmentation and mapping framework that processes multispectral images obtained from an unmanned aerial vehicle (UAV) using a deep neural network (DNN). Most studies on crop/weed semantic segmentation consider only single images for processing and classification, and images taken by UAVs often cover only a few hundred square metres with either colour-only or colour and near-infrared (NIR) channels. Computing a single large, accurate vegetation map (e.g., crop/weed) with a DNN is non-trivial due to: (1) limited ground sample distances (GSDs) in high-altitude datasets, (2) resolution sacrificed by downsampling high-fidelity images, and (3) multispectral image alignment. To address these issues, we adopt a standard sliding-window approach that operates on small portions (tiles) of multispectral orthomosaic maps, which are channel-wise aligned and radiometrically calibrated across the entire map. We define the tile size to equal the DNN input size to avoid resolution loss; a sketch of this tiling follows below. Compared to our baseline model (SegNet with 3-channel RGB inputs), which yields an area under the curve (AUC) of [background = 0.607, crop = 0.681, weed = 0.576], our proposed model with 9 input channels achieves [0.839, 0.863, 0.782]. Additionally, we provide an extensive qualitative and quantitative analysis of 20 trained models to evaluate the effects of varying input channels and tunable network hyperparameters. Furthermore, we release a large sugar beet/weed aerial dataset with expertly guided annotations for further research in remote sensing, precision agriculture and agricultural robotics. (25 pages, 14 figures; MDPI Remote Sensing)
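    The tiling idea, windows exactly the size of the network input so no resampling is needed, can be sketched as follows. The 9-channel map matches the abstract; the 360x480 tile size is an assumption (SegNet's usual input resolution), and the helper itself is illustrative rather than the released code.

        import numpy as np

        def tiles(orthomosaic, tile_h, tile_w):
            """Yield non-overlapping tiles matching the DNN input size.

            orthomosaic: (H, W, C) channel-wise aligned multispectral map
            """
            H, W, _ = orthomosaic.shape
            for y in range(0, H - tile_h + 1, tile_h):
                for x in range(0, W - tile_w + 1, tile_w):
                    yield orthomosaic[y:y + tile_h, x:x + tile_w, :]

        # 9-channel orthomosaic tiled at the assumed network input resolution
        ortho = np.zeros((3600, 4800, 9), dtype=np.float32)
        for tile in tiles(ortho, 360, 480):
            pass  # tile -> DNN forward pass; predictions reassembled by position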

    A survey of image-based computational learning techniques for frost detection in plants

    Frost damage is one of the major concerns for crop growers, as it can impact plant growth and, hence, yield. Early detection of frost can help farmers mitigate its impact. In the past, frost detection was a manual or visual process; image-based techniques are increasingly used to understand frost development in plants and to automatically assess the resulting damage. This research presents a comprehensive survey of the state-of-the-art methods applied to detect and analyse frost stress in plants. We identify three broad computational learning approaches applied to images for this purpose: statistical, traditional machine learning and deep learning. We propose a novel taxonomy, developed to capture the major characteristics of a significant body of published research, and use it to classify the existing studies by several attributes. In this survey, we profile 80 relevant papers according to the proposed taxonomy and thoroughly analyse and discuss the techniques used at each stage: data acquisition, data preparation, feature extraction, computational learning and evaluation. We summarise the current challenges and discuss opportunities for future research and development in this area, including in-field advanced artificial intelligence systems for real-time frost monitoring.
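    Of the three approach families the survey identifies, the traditional machine-learning route is the simplest to sketch: hand-crafted features from image patches feeding a standard classifier. Everything below (the features, the random stand-in data, the labels and the classifier choice) is illustrative, not drawn from any surveyed paper.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        def patch_features(patch):
            """Hand-crafted features: per-channel means and standard deviations."""
            return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

        # Hypothetical dataset: RGB patches labelled frost-damaged (1) or healthy (0)
        patches = np.random.rand(400, 32, 32, 3)
        labels = np.random.randint(0, 2, size=400)

        X = np.array([patch_features(p) for p in patches])
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                                  random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))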

    Automatic plant features recognition using stereo vision for crop monitoring

    Machine vision and robotic technologies have the potential to accurately monitor plant parameters that reflect plant stress and water requirements, for use in farm management decisions. However, autonomous identification of individual leaves on a growing plant under natural conditions is a challenging task for vision-guided agricultural robots, due to the complexity of data across various stages of growth and ambient environmental conditions. Numerous machine vision studies describe the shape of leaves presented individually to a camera, either to identify plant species or to autonomously detect multiple leaves from small seedlings under greenhouse conditions. Machine vision-based detection of individual leaves, and of overlapping leaves on a developed plant canopy, using depth perception under natural outdoor conditions has yet to be reported. Stereo vision has recently emerged in a variety of agricultural applications and is expected to provide an accurate and robust method for plant segmentation and identification that benefits from depth properties. This thesis presents a plant leaf extraction algorithm using a stereo vision sensor. The algorithm performs multiple-leaf segmentation and separation of overlapping leaves using a combination of image features, specifically colour, shape and depth. The separation of connected and overlapping leaves relies on measuring discontinuities in the depth gradient of the disparity maps; two techniques were developed to implement this, based on global and local measurement respectively. A geometric plane can be extracted from each segmented leaf and used to parameterise a 3D model of the plant image and to measure the inclination angle of each individual leaf. A stem and branch segmentation and counting method was developed based on a vesselness measure and the Hough transform. Furthermore, a method for reconstructing the segmented parts of hibiscus plants is presented, generating a 2.5D model of the plant. Experimental tests were conducted in an outdoor environment under varying light conditions with two selected plants: cotton of different sizes, and hibiscus. The proposed algorithm was evaluated on 272 cotton and hibiscus plant images. The results show improved leaf detection when depth features are used, with many leaves in various positions and configurations (single, touching and overlapping) detected successfully. Depth properties were especially effective in separating occluded and overlapping leaves, achieving a separation rate of 84% without any artificial tags on the leaf boundaries. The results show an acceptable segmentation rate of 78% for individual plant leaves, differentiating leaves from their complex backgrounds and from each other, with nearly identical performance for both species under various lighting and environmental conditions. For the stem and branch detection algorithm, tests on 64 colour images of both species under different environmental conditions show higher stem and branch segmentation rates for hibiscus indoor images (82%) than for hibiscus outdoor images (49.5%) and cotton images (21%). The segmentation and counting of plant features could provide accurate estimates of plant growth parameters, benefiting many agricultural tasks and applications.
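    The overlapping-leaf separation hinges on discontinuities in the depth gradient of the disparity map. A compressed OpenCV sketch of that measurement is below; the stereo matcher settings, image paths and gradient threshold are assumptions.

        import cv2
        import numpy as np

        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical pair
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        # Dense disparity via semi-global block matching
        stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
        disparity = stereo.compute(left, right).astype(np.float32) / 16.0

        # Depth-gradient magnitude: large values mark jumps between leaf surfaces
        gx = cv2.Sobel(disparity, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(disparity, cv2.CV_32F, 0, 1, ksize=3)
        grad = cv2.magnitude(gx, gy)

        # Thresholding the gradient (assumed cutoff) traces leaf boundaries
        boundaries = (grad > 8.0).astype(np.uint8) * 255
        cv2.imwrite("leaf_boundaries.png", boundaries)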