197 research outputs found

    Remote Sensing for Precision Nitrogen Management

    This book focuses on fundamental and applied research on the non-destructive estimation and diagnosis of crop leaf and plant nitrogen status, and on in-season nitrogen management strategies based on leaf sensors, proximal canopy sensors, unmanned aerial vehicle remote sensing, manned aerial remote sensing, and satellite remote sensing technologies. Statistical and machine learning methods are used to predict plant-nitrogen-related parameters from sensor data alone or combined with soil, landscape, weather, and/or management information (see the sketch after this abstract). Different sensing technologies and modelling approaches are compared and evaluated. Strategies are developed to use crop sensing data for in-season nitrogen recommendations that improve nitrogen use efficiency and protect the environment.
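    The book above is about sensing and modelling strategy rather than code, but the kind of workflow it describes, predicting a plant-nitrogen-related parameter from canopy-sensor vegetation indices plus soil and weather information, can be illustrated with a short, hedged sketch. Everything below (the file name, column names, and the choice of a random forest) is a hypothetical placeholder, not the book's method.

```python
# Minimal sketch, assuming a table of canopy-sensor readings with ancillary
# soil/weather features and a measured nitrogen target; all names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("canopy_sensor_samples.csv")                  # hypothetical dataset
features = ["ndvi", "ndre", "soil_organic_matter", "gdd", "rainfall_mm"]
X, y = df[features], df["plant_n_uptake_kg_ha"]                # hypothetical target column

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")      # 5-fold cross-validation
print(f"R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```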

    Sustainable Agriculture and Advances of Remote Sensing (Volume 1)

    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, reduce biodiversity loss, and preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering that lead to sustainable agriculture practices. Earth observation data and in situ and proxy-remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among others.

    Developing affordable high-throughput plant phenotyping methods for breeding of cereals and tuber crops

    High-throughput plant phenotyping (HTPP) is a fast, accurate, and non-destructive process for evaluating plants' health and environmental adaptability. HTPP accelerates the identification of agronomic traits of interest, eliminates the subjectivity that is innate to human scoring, and facilitates the development of adapted genotypes. Current HTPP methods often rely on imaging sensors and computer vision, both in the field and under controlled (indoor) conditions. However, their use is limited by the costs and complexity of the necessary instrumentation, data analysis tools, and software. This issue could be overcome by developing more cost-efficient and user-friendly methods that let breeders, farmers, and stakeholders access the benefits of HTPP. To assist such efforts, this thesis presents an ensemble of dedicated, affordable phenotyping methods using RGB imaging for a range of key applications under controlled conditions.
    The affordable Phenocave imaging system for use in controlled conditions was developed to facilitate studies on the effects of abiotic stresses by gathering data on important plant characteristics related to growth, yield, and adaptation to growing conditions and cultivation systems. Phenocave supports imaging sensors including visible (RGB), spectroscopic (multispectral and hyperspectral), and thermal cameras. Additionally, a pipeline for RGB image analysis was implemented as a plugin for the free and easy-to-use software ImageJ (see the sketch after this abstract). This plugin has since proven to be an accurate alternative to conventional measurements that produces highly reproducible results. A subsequent study evaluated the effects of heat and drought stress on plant growth and grain nutrient composition in wheat, an important staple cereal in Sweden. The effects of stress on plant growth were evaluated using image analysis, while stress-induced changes in the abundance of key plant compounds were evaluated by analyzing the nutrient composition of grains via chromatography. This led to the discovery of genotypes whose harvest quality remains stable under heat and drought stress. The next objective was to evaluate biotic stress; here, the fungal disease Fusarium head blight (FHB), which affects grain development in wheat, was investigated. Seed phenotyping parameters were used to determine the components and settings of a statistical model that predicts the occurrence of FHB. The results reveal that grain morphology parameters, such as length and width, were significantly affected by the disease. Another study estimated the severity of common scab (CS) in potatoes, a widely popular food source. CS occurs on the tubers and reduces their visual appeal, significantly affecting their market value. Tubers were analyzed with a deep learning-based method to estimate the disease lesion areas caused by CS. The results showed a high correlation between the predictions and expert visual scores of the disease, and the method proved to be a potential tool for selecting genotypes that meet market standards and resist CS. Both case studies highlight the role of imaging in plant health monitoring and its integration into the larger picture of plant health management.
    The methods presented in this work are a starting point for bridging the gap between cost and access to imaging technology. They are affordable and user-friendly resources for generating pivotal knowledge on plant development and genotype selection. In the future, image acquisition for all the methods can be integrated into the Phenocave system, potentially allowing for a more automated and efficient plant health monitoring process and leading to the identification of genotypes tolerant to biotic and abiotic stresses.
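    The RGB image-analysis pipeline mentioned above (the ImageJ plugin) is not reproduced here, but a typical measurement of that kind, segmenting the plant in a top-view RGB image and reporting its projected area, can be sketched as follows. This is a minimal illustration rather than the thesis code; the image path, the excess-green threshold, and the use of OpenCV are assumptions.

```python
# Minimal sketch: projected plant area from a top-view RGB image via an
# excess-green index threshold. Path and threshold are hypothetical.
import cv2
import numpy as np

img = cv2.imread("plant_topview.png").astype(np.float32) / 255.0
b, g, r = img[..., 0], img[..., 1], img[..., 2]       # OpenCV loads images as BGR
exg = 2 * g - r - b                                    # excess-green highlights vegetation
mask = (exg > 0.1).astype(np.uint8)                    # hypothetical threshold
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # remove specks

print(f"Projected plant area: {int(mask.sum())} pixels")
```

    In a breeding context, this pixel area would then be tracked over time per genotype, or converted to a physical area using a calibration target of known size.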

    UAVs for the Environmental Sciences

    This book gives an overview of the use of UAVs in the environmental sciences, covering technical basics, data acquisition with different sensors, and data processing schemes, and illustrating various examples of application.

    Artificial intelligence and image processing applications for high-throughput phenotyping

    Doctor of Philosophy, Department of Computer Science, Mitchell L. Neilsen
    The areas of Computer Vision and Scientific Computing have witnessed rapid growth in the last decade, with industrial robotics, automotive, and healthcare acting as the primary vehicles for research and advancement. However, related research in other fields, such as agriculture, remains understudied. This dissertation explores the application of Computer Vision and Scientific Computing in an agricultural domain known as High-throughput Phenotyping (HTP). HTP is the assessment of complex seed traits such as growth, development, tolerance, resistance, ecology, and yield, and the measurement of parameters that form more complex traits. The dissertation makes the following contributions.
    The first contribution is the development of algorithms to estimate morphometric traits such as length, width, area, and seed kernel count using 3-D graphics and static image processing, and the extension of existing algorithms for the same (see the sketch after this abstract). The second contribution is the development of lightweight frameworks to aid in synthetic image dataset creation and image cropping for deep neural networks in HTP. Deep neural networks require a plethora of training data to yield results of the highest quality. However, no such training datasets are readily available for HTP research, especially on seed kernels. The proposed synthetic image generation framework helps generate a profusion of training data at will to train neural networks from a meager sample of seed kernels. Besides requiring large quantities of data, deep neural networks require the input to be of a certain size. However, not all available data are in the size required by the deep neural networks. The proposed image cropper helps to resize images without introducing any distortion, thereby making image data fit for consumption.
    The third contribution is the design and analysis of supervised and self-supervised neural network architectures trained on synthetic images to perform the tasks of seed kernel classification, counting, and morphometry. In the area of supervised image classification, the state-of-the-art neural network models VGG-16, VGG-19, and ResNet-101 are investigated. A Simple framework for Contrastive Learning of visual Representations (SimCLR) [137], Momentum Contrast (MoCo) [55], and Bootstrap Your Own Latent (BYOL) [123] are leveraged for self-supervised image classification. The instance segmentation deep neural network models Mask R-CNN and YOLO are utilized to perform the tasks of seed kernel classification, segmentation, and counting. The results demonstrate the feasibility of deep neural networks for their respective tasks of classification and instance segmentation. In addition to estimating seed kernel counts from static images, algorithms that aid in seed kernel counting from videos are proposed and analyzed. An algorithm is proposed that creates a slit image which can be analyzed to estimate seed count; once the slit image is created, the video is no longer required, thereby significantly lowering the computational resources needed for the estimation.
    The fourth contribution is the development of an end-to-end, automated image capture system for single seed kernel analysis. In addition to estimating length and width from 2-D images, the proposed system estimates the volume of a seed kernel from 2-D images using the technique of volume sculpting. The relative standard deviation of the results produced by the proposed technique is lower (better) than that of volumetric estimation using the ellipsoid slicing technique. The fifth contribution is the development of image processing algorithms that provide feature enhancements to mobile applications to improve on-site phenotyping capabilities. Algorithms for two high-value features, namely leaf angle estimation and fractional plant cover estimation, are developed. The leaf angle estimation feature estimates the angle between stem and leaf in images captured with mobile phone cameras, whereas fractional plant cover estimation helps determine companion plants, i.e., plants that are able to co-exist and mutually benefit. The proposed techniques, frameworks, and findings lay a solid foundation for future Computer Vision and Scientific Computing research in the domain of agriculture. The contributions are significant since the dissertation not only proposes techniques, but also develops low-cost, end-to-end frameworks to leverage the proposed techniques in a scalable fashion.
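    The morphometry and counting contribution above can be illustrated with a short, hedged sketch that is not the dissertation's algorithm: kernels are segmented from a high-contrast 2-D image, counted, and measured with rotated bounding boxes. The image path, the assumption of bright kernels on a dark background, and the minimum-area filter are all placeholders.

```python
# Minimal sketch: seed kernel count and per-kernel length/width from a 2-D image.
# Assumes bright kernels on a dark background; invert the threshold otherwise.
import cv2

gray = cv2.imread("seed_kernels.png", cv2.IMREAD_GRAYSCALE)          # hypothetical image
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

kernels = [c for c in contours if cv2.contourArea(c) > 50]            # drop small specks
print(f"Seed count: {len(kernels)}")
for c in kernels:
    (_, _), (w, h), _ = cv2.minAreaRect(c)                            # rotated bounding box
    length, width = max(w, h), min(w, h)                              # in pixels
    print(f"length={length:.1f}px  width={width:.1f}px  area={cv2.contourArea(c):.0f}px^2")
```

    Real pipelines also need touching-kernel separation (e.g. watershed) and a pixel-to-millimetre calibration, which are omitted here.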

    An investigation of change in drone practices in broadacre farming environments

    The application of drones in broadacre farming is influenced by novel and emergent factors. Drone technology is subject to legal, financial, social, and technical constraints that affect the Agri-tech sector. This research showed that emerging improvements to drone technology influence the analysis of precision data, resulting in disparate and asymmetrically flawed Ag-tech outputs. The novelty of this thesis is that it examines changes in drone technology through the lens of entropic decay. It considers the planning and controlling of an organisation’s resources to minimise harmful effects through systems change. The rapid advances in drone technology have outpaced the systematic approaches that precision agriculture insists are the backbone of reliable ongoing decision-making. Different models and brands take data from different heights, at different times of the day, and at differing flight velocities. Drone data are in a state of decay, no longer directly comparable to past years’ harvest and crop data, and are now mixed into a blended environment of brand-specific variations in height, image resolution, air speed, and optics. This thesis investigates the problem of the rapid emergence of image-capture technology in drones and the corresponding shift away from the established measurements and comparisons used in precision agriculture. New capabilities are applied in an ad hoc manner as different features are rushed to market. At the same time, existing practices are subtly changed to suit individual technology capabilities. The result is a loose collection of technically superior drone imagery with a corresponding mismatch of year-to-year agricultural data. The challenge is to understand and identify the difference between uniformly accepted technological advances and market-driven changes that demonstrate entropic decay. The goal of this research is to identify best-practice approaches for UAV deployment in broadacre farming. This study investigated the benefits of a range of characteristics to optimise data collection technologies. It identified widespread discrepancies, demonstrating a broadening decay in precision agriculture and productivity. The pace of drone development has diverged so rapidly from mainstream agricultural practice that the once reliable yearly crop data no longer share statistically comparable metrics. Whilst farmers have relied upon decades of satellite data collected with the same optics, time of day, and flight paths, the innovations that drive increasingly smarter drone technologies are also highly problematic, since they render each successive past year’s crop metrics outdated in terms of sophistication, detail, and accuracy. In five years, the standardised height for recording crop data has changed four times. New innovations, coupled with new rules and regulations, have altered the once reliable practice of recording crop data. In addition, the cost of entry in adopting new drone technology is sufficiently varied that agriculturalists are acquiring multiple versions of different UAVs with variable camera and sensor settings and vastly different approaches to flight records, data management, and recorded indices. Without addressing this problem, the true benefits of optimisation through machine learning are prevented from improving harvest outcomes for broadacre farming.
    The key findings of this research reveal a complex, constantly morphing environment that is seeking to build digital trust and reliability in an evolving global market in the face of rapidly changing technology, regulations, standards, networks, and knowledge. The once reliable discipline of precision agriculture is now a fractured melting pot of “first to market” innovations and highly competitive sellers. The future of drone technology is destined for further uncertainty as it struggles to establish a level of maturity that can return broadacre farming to consistent global outcomes.

    Fine-scale Inventory of Forest Biomass with Ground-based LiDAR

    Biomass measurement provides a baseline for the ecosystem valuation required by modern forest management. The advent of ground-based LiDAR technology, renowned for its 3-D sampling resolution, has been altering the routines of biomass inventory. The thesis develops a set of innovative approaches in support of fine-scale biomass inventory, including automatic extraction of stem statistics (see the sketch after this abstract), robust delineation of plot biomass components, accurate classification of individual tree species, and repeatable scanning of plot trees using a lightweight scanning system. The main achievements in terms of accuracy are a relative root mean square error of 11% for stem volume extraction, a mean classification accuracy of 0.72 for plot wood components, and a classification accuracy of 92% among seven tree species. The results indicate the technical feasibility of biomass delineation and monitoring from plot-level, multi-species point cloud datasets, whereas point occlusion and the lack of fine-scale validation datasets remain challenges for ground-based 3-D biomass analysis.
    Funding: S.G.S. International Tuition Award from the University of Lethbridge; the Dean's Scholarship from the University of Lethbridge; Campus Alberta Innovates Program; NSERC Discovery Grants Program.
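    The stem-volume extraction mentioned above is not detailed in the abstract, so the sketch below only illustrates one common way to approximate stem volume from a terrestrial LiDAR stem point cloud: slice the stem by height, fit a circle to each slice, and sum the frusta between consecutive slices. The slice thickness, minimum point count, and the Kasa circle fit are illustrative assumptions, not the thesis method.

```python
# Minimal sketch: stem volume from an (N, 3) array of XYZ points (metres)
# belonging to a single stem. All parameters are illustrative.
import numpy as np

def fit_circle_radius(xy):
    """Algebraic (Kasa) least-squares circle fit; returns the radius."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (cx, cy, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return np.sqrt(c + cx**2 + cy**2)

def stem_volume(points, slice_h=0.5, min_pts=10):
    """Sum frustum volumes between circles fitted to successive height slices."""
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + slice_h, slice_h)
    radii = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points[(z >= lo) & (z < hi), :2]
        if len(sl) >= min_pts:
            radii.append(fit_circle_radius(sl))
    return sum(np.pi * slice_h / 3.0 * (r1**2 + r1 * r2 + r2**2)
               for r1, r2 in zip(radii[:-1], radii[1:]))
```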

    Just-in-time Pastureland Trait Estimation for Silage Optimization, under Limited Data Constraints

    To ensure that pasture-based farming meets production and environmental targets for a growing population under increasing resource constraints, producers need to know pastureland traits. Current proximal pastureland trait prediction methods largely rely on vegetation indices to determine biomass and moisture content. The development of new techniques relies on the challenging task of collecting labelled pastureland data, leading to small datasets. Classical computer vision has already been applied to weed identification and the recognition of fruit blemishes using morphological features, but machine learning algorithms can parameterise models without the provision of explicit features, and deep learning can extract even more abstract knowledge, although typically this is assumed to require very large datasets. This work hypothesises that, through the advantages of state-of-the-art deep learning systems, pastureland crop traits can be accurately assessed in a just-in-time fashion, based on data retrieved from an inexpensive sensor platform, under the constraint of limited amounts of labelled data. However, the challenges in achieving this overall goal are great, and for applications such as just-in-time yield and moisture estimation for farm machinery, this work must bring together systems development, knowledge of good pastureland practice, and techniques for handling low-volume datasets in a machine learning context.
    Given these challenges, this thesis makes a number of contributions. The first is a comprehensive literature review relating pastureland traits to ruminant nutrient requirements and exploring trait estimation methods, from contact to remote sensing, including details of vegetation indices and the sensors and techniques required to use them. The second major contribution is a high-level specification of a platform for collecting and labelling pastureland data. This includes the collection of four-channel Blue, Green, Red, and NIR (VISNIR) images, narrowband data, height, and temperature differential using inexpensive proximal sensors, and provides a basis for holistic data analysis. Physical data platforms built around this specification were created to collect and label pastureland data, involving computer scientists, agricultural, mechanical and electronic engineers, and biologists from academia and industry, working with farmers. Using the developed platform and a set of protocols for data collection, a further contribution of this work was a multi-sensor, multimodal dataset of pastureland properties. This dataset comprises four-channel image data, height data, thermal data, Global Positioning System (GPS) data, and hyperspectral data, and is available labelled with biomass (kg/ha) and percentage dry matter, ready for use in deep learning.
    The most notable contribution of this work, however, was a systematic investigation of various machine learning methods applied to the collected data in order to maximise model performance under the constraints indicated above. The initial set of models focused on the collected hyperspectral datasets; however, due to their relative complexity in real-time deployment, the focus shifted to models that could best leverage image data. The main body of these models centred on image processing methods and, in particular, the use of the Inception ResNet and MobileNet models to predict fresh biomass and percentage dry matter, enhancing performance using data fusion, transfer learning, and multi-task learning. Images were subdivided to augment the dataset using two different patch sizes, resulting in around 10,000 small patches of 156 x 156 pixels and around 5,000 large patches of 240 x 240 pixels (see the sketch after this abstract). Five-fold cross validation was used in all analyses. Prediction accuracy was compared to that of older mechanisms, albeit using the collected hyperspectral data, with no provision made for lighting, humidity, or temperature. The hyperspectral labelled data did not produce accurate results when used to calculate the Normalized Difference Vegetation Index (NDVI) or to train a neural network (NN), a 1-D Convolutional Neural Network (CNN), or Long Short-Term Memory (LSTM) models. Potential reasons for this are discussed, including issues around the use of highly sensitive devices in uncontrolled environments. The most accurate prediction came from a multi-modal hybrid model that concatenated the output of an Inception ResNet based model run on RGB data with ImageNet pre-trained weights, the output of a residual network trained on NIR data, and LiDAR height data before the fully connected layers, using the small-patch dataset, with a minimum validation MAPE of 28.23% for fresh biomass and 11.43% for dryness. However, a very similar prediction accuracy resulted from a model that omitted the NIR data, thus requiring fewer sensors and training resources and making it more sustainable. Although NIR and temperature differential data were collected and used for analysis, neither improved prediction accuracy: the Inception ResNet model’s minimum validation MAPE rose to 39.42% when NIR data was added, and a multi-task learning Inception ResNet model using both NIR and temperature differential data yielded a minimum validation MAPE of 33.32%. As more labelled data are collected, the models can be further trained, enabling sensors on mowers to collect data and give timely trait information to farmers. The technology is also transferable to other crops. Overall, this work should provide a valuable contribution to the smart agriculture research space.
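    Two of the concrete steps mentioned above, computing NDVI from the four-channel VISNIR imagery and subdividing images into 156 x 156 patches, are simple enough to sketch. The array layout (H, W, 4 with channels ordered B, G, R, NIR as reflectance values) and the non-overlapping tiling are assumptions for illustration, not the thesis implementation.

```python
# Minimal sketch: per-pixel NDVI from a B, G, R, NIR image and patch extraction.
import numpy as np

def ndvi(visnir):
    """NDVI = (NIR - Red) / (NIR + Red); assumes channel order B, G, R, NIR."""
    red, nir = visnir[..., 2], visnir[..., 3]
    return (nir - red) / (nir + red + 1e-8)          # epsilon avoids division by zero

def to_patches(img, size=156):
    """Tile an image into non-overlapping size x size patches (edges discarded)."""
    h, w = img.shape[:2]
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

# Example with synthetic data:
visnir = np.random.rand(624, 624, 4)
print(ndvi(visnir).shape, len(to_patches(visnir)))   # (624, 624) and 16 patches
```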