15 research outputs found

    UAVs and Machine Learning Revolutionising Invasive Grass and Vegetation Surveys in Remote Arid Lands

    Get PDF
    This is the final version of the article, available from MDPI via the DOI in this record. The monitoring of invasive grasses and vegetation in remote areas is challenging, costly, and sometimes dangerous on the ground. Satellite and manned aircraft surveys can assist, but their use may be limited by ground sampling resolution or cloud cover. Straightforward and accurate surveillance methods are needed to quantify rates of grass invasion, produce appropriate vegetation tracking reports, and apply optimal control methods. This paper presents a pipeline to detect and generate a pixel-wise segmentation of invasive grasses, using buffel grass (Cenchrus ciliaris) and spinifex (Triodia sp.) as examples. The process integrates unmanned aerial vehicles (UAVs), also commonly known as drones, high-resolution red, green, blue colour model (RGB) cameras, and a data processing approach based on machine learning algorithms. The methods are illustrated with data acquired in Cape Range National Park, Western Australia (WA), Australia, orthorectified in Agisoft PhotoScan Pro and processed in the Python programming language with the scikit-learn and eXtreme Gradient Boosting (XGBoost) libraries. In total, 342,626 samples were extracted from the obtained data set and labelled into six classes. Segmentation results provided an individual detection rate of 97% for buffel grass and 96% for spinifex, with a global multiclass pixel-wise detection rate of 97%. The results were robust against illumination changes, object rotation, occlusion, background clutter, and floral density variation. This work was funded by the Plant Biosecurity Cooperative Research Centre (PBCRC) 2164 project, Agriculture Victoria Research, and the Queensland University of Technology (QUT). The authors would like to acknowledge Derek Sandow and the WA Parks and Wildlife Service for the logistic support and permits to access the survey areas at Cape Range National Park.
The authors would also like to acknowledge Eduard Puig-Garcia for his contributions in co-planning the experimentation phase. The authors gratefully acknowledge the support of the QUT Research Engineering Facility (REF) Operations Team (Dirk Lessner, Dean Gilligan, Gavin Broadbent and Dmitry Bratanov), who operated the DJI S800 EVO UAV and image sensors and performed ground referencing. We thank Gavin Broadbent for the design, manufacturing, and tuning of a two-axis gimbal for the camera. We also acknowledge the High-Performance Computing and Research Support Group at QUT for the computational resources and services used in this work.

    A Sense of Scale: Mapping Exotic Annual Grasses with Satellite Imagery Across a Landscape and Quantifying Their Biomass at a Plot Level with Structure-from-Motion in a Semi-Arid Ecosystem

    Get PDF
    The native vegetation communities in the sagebrush steppe, a semi-arid ecosystem type, are under threat from exotic annual grasses. Exotic annual grasses increase fire severity and frequency, decrease biodiversity, and reduce soil carbon storage, among other impacts on ecosystem services. The invasion of exotic annual grasses causes detrimental impacts on land use by eliminating forage for livestock and creating huge economic costs from fire control and post-fire restoration. To combat invasion, land managers need to know which exotic annual grasses are present, where they are invading, and how much biomass they represent. Mapping exotic annual grasses is challenging because many areas in the sagebrush steppe are difficult to access, yet field measurements are the main method of identifying and quantifying them. In this study, we address this challenge by exploring the use of both landscape-scale and plot-scale observations with remote sensing. First, we use satellite imagery to map where exotic annual grasses are invading and to identify the native species being encroached upon. Second, we investigate the use of fine-scale imagery for non-destructive measurement of exotic annual grass biomass. Knowing the location of exotic annual grasses is important for restoration efforts, e.g., large-swath (~100 m) herbicide spraying. Restoration efforts are expensive and often ineffective in areas already dominated by exotic annual grasses. Early detection of exotic annual grasses in sagebrush and native grass communities will increase the chances of effective ecosystem restoration. We used Sentinel-2 satellite imagery in Google Earth Engine, a cloud computing platform, to train a random forest (RF) machine learning algorithm to map vegetation across ~150,000 acres of sagebrush steppe in southeast Idaho. The result is a classification map of vegetation (overall accuracy of 72%) and a map of percent cover of annual grass (R2 = 0.58).
The combination of these two maps will allow land managers to target areas for restoration and make informed decisions about where to allow grazing. In addition to knowing which exotic annual grasses exist and their percent cover, detailed information about their biomass is important for understanding fuel loads and forage quality. Structure from Motion (SfM) is a photogrammetry technique that uses digital images to develop 3-dimensional point clouds that can be transformed into volumetric estimates of biomass. The SfM technique has the potential to quantify biomass across multiple plots while minimizing field work. We developed allometric equations relating SfM-derived volume (m3) to biomass (g/m2) for a study area in southeast Oregon. The resulting equation showed a positive relationship (R2 = 0.51) between log-transformed SfM-derived volume and log-transformed biomass when litter was removed. This relationship shows promise for being upscaled to larger surveys using aerial platforms. The method can reduce the need to destructively harvest biomass, and thus allow field work to cover a greater spatial extent. Ultimately, increasing spatial coverage for biomass will improve accuracy in quantifying fuel loads and carbon storage, providing insight into how these exotic plants are altering ecosystem services.
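The allometric step described above — regressing log-transformed biomass on log-transformed SfM-derived volume — can be sketched as follows. The volumes and biomass values here are invented for illustration, not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical SfM-derived plot volumes (m3) and harvested biomass (g/m2)
volume = np.array([0.02, 0.05, 0.08, 0.12, 0.20, 0.35, 0.50, 0.80])
biomass = np.array([15.0, 30.0, 42.0, 60.0, 95.0, 140.0, 190.0, 280.0])

# Allometric model on log-transformed variables:
#   log(biomass) = a + b * log(volume)
X = np.log(volume).reshape(-1, 1)
y = np.log(biomass)
model = LinearRegression().fit(X, y)
r2 = model.score(X, y)

# Back-transform to predict biomass for a new plot volume of 0.25 m3
predicted = np.exp(model.predict(np.log([[0.25]])))[0]
print(f"b={model.coef_[0]:.2f}, a={model.intercept_:.2f}, "
      f"R2={r2:.2f}, biomass(0.25 m3)={predicted:.1f} g/m2")
```

Note that back-transforming from log space can introduce a small systematic bias; a correction factor is sometimes applied in allometric work.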

    Mapping invasive plants using RPAS and remote sensing

    Get PDF
    The ability to accurately detect invasive plant species is integral to their management, treatment, and removal. This study focused on developing and evaluating RPAS-based methods for detecting invasive plant species using image analysis and machine learning, and was conducted in two stages. First, supervised classification to identify the invasive yellow flag iris (Iris pseudacorus; YFI) was performed in a wetland environment using high-resolution raw imagery captured with an uncalibrated visible-light camera. Colour thresholding, template matching, and de-speckling prior to training a random forest classifier are explored in terms of their benefits for the resulting classification of YFI plants within each image. The impact of feature selection prior to training is also explored. Results from this work demonstrate the importance of image processing: applying colour thresholding and de-speckling prior to classification by a random forest classifier, trained to identify patches of YFI using spectral and textural features, provided the best results. Second, orthomosaics generated from multispectral imagery were used to detect and predict the relative abundance of spotted knapweed (Centaurea maculosa) in a heterogeneous grassland ecosystem. Relative abundance was categorized in qualitative classes and validated through field-based plant species inventories. The method developed for this work, termed metapixel-based image analysis, segments orthomosaics into a grid of metapixels for which grey-level co-occurrence matrix (GLCM)-based statistics can be computed as descriptive features. Using RPAS-acquired multispectral imagery and plant species inventories performed on 1 m2 quadrats, a random forest classifier was trained to predict the qualitative degree of spotted knapweed ground cover within each metapixel.
Analysis of the performance of metapixel-based image analysis in this study suggests that feature optimization and the use of GLCM-based texture features are of critical importance for achieving an accurate classification. Additional work to further test the generalizability of the detection methods is recommended prior to deployment across multiple sites.

Keywords: remote sensing; remotely piloted aircraft systems (RPAS); invasive plant species; machine learning
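The GLCM statistics computed per metapixel can be sketched with a minimal NumPy implementation; this is an illustrative sketch of the kind of texture descriptors the study uses (a library such as scikit-image provides the same via its graycomatrix/graycoprops functions), and the tile size, grey-level count, and pixel offset are assumptions.

```python
import numpy as np

def glcm_features(tile, levels=8):
    """Grey-level co-occurrence (GLCM) texture statistics for one metapixel."""
    # Quantise the 8-bit tile to a small number of grey levels
    q = np.clip(tile.astype(int) * levels // 256, 0, levels - 1)
    # Co-occurrence counts for horizontally adjacent pixel pairs, offset (0, 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()                       # joint probabilities
    i, j = np.indices(glcm.shape)
    contrast = float((glcm * (i - j) ** 2).sum())
    homogeneity = float((glcm / (1.0 + np.abs(i - j))).sum())
    energy = float((glcm ** 2).sum())
    return contrast, homogeneity, energy

# One synthetic 16x16 metapixel; in the study, each metapixel's features
# would become one row of the random forest's training matrix.
rng = np.random.default_rng(7)
tile = rng.integers(0, 256, size=(16, 16)).astype(np.uint8)
contrast, homogeneity, energy = glcm_features(tile)
print(f"contrast={contrast:.2f} homogeneity={homogeneity:.2f} energy={energy:.3f}")
```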

    WeedMap: A large-scale semantic weed mapping framework using aerial multispectral imaging and deep neural network for precision farming

    Full text link
    We present a novel weed segmentation and mapping framework that processes multispectral images obtained from an unmanned aerial vehicle (UAV) using a deep neural network (DNN). Most studies on crop/weed semantic segmentation only consider single images for processing and classification. Images taken by UAVs often cover only a few hundred square meters with either color only or color and near-infrared (NIR) channels. Computing a single large and accurate vegetation map (e.g., crop/weed) using a DNN is non-trivial due to difficulties arising from: (1) limited ground sample distances (GSDs) in high-altitude datasets, (2) sacrificed resolution resulting from downsampling high-fidelity images, and (3) multispectral image alignment. To address these issues, we adopt a standard sliding window approach that operates on only small portions of multispectral orthomosaic maps (tiles), which are channel-wise aligned and radiometrically calibrated across the entire map. We define the tile size to be the same as that of the DNN input to avoid resolution loss. Compared to our baseline model (i.e., SegNet with 3-channel RGB inputs) yielding an area under the curve (AUC) of [background=0.607, crop=0.681, weed=0.576], our proposed model with 9 input channels achieves [0.839, 0.863, 0.782]. Additionally, we provide an extensive analysis of 20 trained models, both qualitatively and quantitatively, in order to evaluate the effects of varying input channels and tunable network hyperparameters. Furthermore, we release a large sugar beet/weed aerial dataset with expertly guided annotations for further research in the fields of remote sensing, precision agriculture, and agricultural robotics. Comment: 25 pages, 14 figures, MDPI Remote Sensing
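The tiling scheme described above — cutting the aligned orthomosaic into chunks the size of the DNN input so no resolution is lost to downsampling — can be sketched as follows. The 480-pixel tile edge is an assumption for illustration; the 9-channel count comes from the abstract, but the array itself is synthetic.

```python
import numpy as np

def tile_orthomosaic(ortho, tile=480, stride=480):
    """Yield (row, col, chunk) tiles from a channel-aligned orthomosaic.
    The tile edge matches the DNN input size, so each chunk can be fed to
    the network at full resolution."""
    h, w, _ = ortho.shape
    for r in range(0, h - tile + 1, stride):
        for c in range(0, w - tile + 1, stride):
            yield r, c, ortho[r:r + tile, c:c + tile, :]

# Synthetic 9-channel orthomosaic (e.g. RGB + NIR + derived indices)
ortho = np.zeros((960, 1440, 9), dtype=np.float32)
tiles = list(tile_orthomosaic(ortho))
print(f"{len(tiles)} tiles of shape {tiles[0][2].shape}")  # 6 tiles of (480, 480, 9)
```

An overlapping stride (stride < tile) is a common variant that smooths prediction seams at tile borders when the per-tile outputs are stitched back into a map.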

    Use of UAV Imagery and Nutrient Analyses for Estimation of the Spatial and Temporal Contributions of Cattle Dung to Nutrient Cycling in Grazed Ecosystems

    Get PDF
    Nutrient inputs from cattle dung are crucial drivers of nutrient cycling processes in grazed ecosystems. These inputs are important both spatially and temporally and are affected by variables such as grazing strategy, water location, and the nutritional profile of the forage being grazed. Past research has attempted to map dung deposition patterns in order to more accurately estimate nutrient input, but the large spatial extent of a typical pasture and the tedious nature of identifying and mapping individual dung pats have prohibited the development of a time- and cost-effective methodology. The first objective of this research was to develop and validate a new method for the detection and mapping of dung using an unmanned aerial vehicle (UAV) and multispectral imagery. The second objective was to quantify change over time in water-extractable organic carbon (WEOC), water-extractable phosphorus (WEP), and water-extractable nitrogen (WEN) in naturally deposited dung ranging from one to twenty-four days old. In addition, pre-analysis dung storage methods (refrigeration vs. freezing) were evaluated for their impact on laboratory analysis results. Multispectral images of pastures were classified using object-based image analysis. Post-classification accuracy assessment showed an overall accuracy of 82.6% and a Kappa coefficient of 0.71. Most classification errors were attributable to the misclassification of dung as vegetation, especially in spectrally heterogeneous areas such as trampled vegetation. Limitations to implementing this method for identifying and mapping cattle dung at large scales include the high degree of geospatial accuracy required for successful classification and the need for additional method validation in diverse grassland environments. Dung WEN concentrations ranged from 1.20 g kg-1 at three days of age to a low of 0.252 g kg-1 at 24 days.
The highest WEOC values were in day-old dung, 19.25 g kg-1, and the lowest in 14-day-old dung, 2.86 g kg-1. WEOC and WEN both followed exponential decay patterns of loss as dung aged. WEP was lowest at 1.28 g kg-1 (day one) and highest at 12 days (3.24 g kg-1); dry matter and WEOC concentration were stronger determinants of WEP than age alone. Freezing consistently increased WEN and WEOC concentrations over fresh values, but the WEP response was inconsistent across ages. This research provides new insight into dung nutrient dynamics and presents a novel method for studying them across large spatial and temporal scales. Advisors: Martha Mamo and Jerry Volesky
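The exponential decay pattern reported for WEOC and WEN can be fitted with a simple log-linear regression, C(t) = C0 * exp(-k t). The ages and concentrations below are illustrative points loosely consistent with the figures quoted above, not the study's data.

```python
import numpy as np

# Illustrative WEOC concentrations (g kg-1) vs dung age (days)
age = np.array([1.0, 3.0, 7.0, 14.0, 24.0])
weoc = np.array([19.25, 12.0, 6.0, 2.86, 1.5])

# Fit log(C) = log(C0) - k * t with ordinary least squares
slope, intercept = np.polyfit(age, np.log(weoc), 1)
k, c0 = -slope, np.exp(intercept)
half_life = np.log(2) / k
print(f"C0={c0:.2f} g/kg, k={k:.3f} per day, half-life={half_life:.1f} days")
```

The log-linear fit is adequate when all concentrations are positive; a nonlinear least-squares fit on the untransformed data is an alternative that weights early, high-concentration points more heavily.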

    Semantic Segmentation based deep learning approaches for weed detection

    Get PDF
    The global increase in herbicide use to control weeds has led to issues such as the evolution of herbicide-resistant weeds and off-target herbicide movement. Precision agriculture advocates Site-Specific Weed Management (SSWM) to apply the right amount of herbicide precisely and reduce off-target herbicide movement. Recent advances in Deep Learning (DL) have opened possibilities for adaptive and accurate weed recognition in field-based SSWM applications with traditional and emerging spraying equipment; however, challenges remain in identifying the DL model structure and training the model appropriately for accurate and rapid application over varying crop/weed growth stages and environments. In our study, an encoder-decoder based DL architecture was proposed that performs pixel-wise Semantic Segmentation (SS) of crop, soil, and weed patches in the field. The objective of this study was to develop a robust weed detection algorithm using DL techniques that can accurately and reliably locate weed infestations in low-altitude Unmanned Aerial Vehicle (UAV) imagery at acceptable application speed. Two encoder-decoder based SS models, LinkNet and UNet, were developed using transfer learning techniques. We applied measures such as backpropagation optimization and refinement of the training dataset to address the class-imbalance problem, a common issue in developing weed detection models. It was found that the LinkNet model with ResNet18 as the encoder section and the 'Focal loss' loss function achieved the highest mean and class-wise Intersection over Union scores across class categories when predicting on an unseen dataset.
The developed state-of-the-art model did not require a large amount of data during training, and the techniques used to develop it provide a promising approach that performs better than existing SS-based weed detection models. The proposed model could be used for weed detection on aerial imagery from UAVs and for real-time SSWM applications. Advisor: Yeyin Shi
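The 'Focal loss' named above counters class imbalance by down-weighting well-classified pixels so that rare classes such as weed patches drive training. A minimal NumPy sketch of the multiclass form, -(1 - p_t)**gamma * log(p_t), is shown below; the study applies it inside a LinkNet/ResNet18 training loop, and the class names and probabilities here are invented for illustration.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0):
    """Multiclass focal loss averaged over pixels. probs holds softmax
    outputs, one row per pixel; targets holds the true class indices."""
    eps = 1e-7
    p_t = np.clip(probs[np.arange(len(targets)), targets], eps, 1.0)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

# Three pixels with softmax outputs over {0: crop, 1: soil, 2: weed}
probs = np.array([[0.90, 0.05, 0.05],
                  [0.20, 0.70, 0.10],
                  [0.30, 0.30, 0.40]])
targets = np.array([0, 1, 2])
print(f"focal loss: {focal_loss(probs, targets):.4f}")
```

With gamma = 0 the expression reduces to ordinary cross-entropy; larger gamma suppresses the contribution of confidently classified pixels more strongly.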

    Sensing Mountains

    Get PDF
    Sensing mountains by close-range and remote techniques is a challenging task. The 4th edition of the international Innsbruck Summer School of Alpine Research 2022 – Close-range Sensing Techniques in Alpine Terrain – brings together early-career and experienced scientists from technical, geo- and environmental research fields. The interdisciplinary setting of the summer school creates a creative space for exchanging and learning new concepts and solutions for mapping, monitoring, and quantifying mountain environments under ongoing conditions of change.

    The rapid rise of next-generation natural history

    Get PDF
    Many ecologists have lamented the demise of natural history and have attributed this decline to a misguided view that natural history is outdated and unscientific. Although there is a perception that the focus in ecology and conservation has shifted away from descriptive natural history research and training toward hypothetico-deductive research, we argue that natural history has entered a new phase that we call “next-generation natural history.” This renaissance of natural history is characterized by technological and statistical advances that aid in collecting detailed observations systematically over broad spatial and temporal extents. The technological advances that have increased exponentially in the last decade include electronic sensors such as camera traps and acoustic recorders, aircraft- and satellite-based remote sensing, animal-borne biologgers, genetic and genomic methods, and community science programs. Advances in statistics and computation have aided in analyzing a growing quantity of observations to reveal patterns in nature. These robust next-generation natural history datasets have transformed the anecdotal perception of natural history observations into systematically collected observations that collectively constitute the foundation for hypothetico-deductive research and can be leveraged and applied to conservation and management. These advances encourage scientists to conduct and embrace the detailed description of nature that remains a critically important component of the scientific endeavor. Finally, these next-generation natural history observations are engaging scientists and non-scientists alike with new documentation of the wonders of nature. Thus, we celebrate next-generation natural history for encouraging people to experience nature directly.

    The use of machine learning algorithms to assess the impacts of droughts on commercial forests in KwaZulu-Natal, South Africa.

    Get PDF
    Masters Degree. University of KwaZulu-Natal, Pietermaritzburg. Droughts are a non-selective natural disaster in that they can occur in both high and low precipitation areas. However, this study acknowledged that droughts are more recurrent and a regular feature in arid and semi-arid climates such as those of Southern Africa. Some of these countries rely strongly on commercial forests for their gross domestic product (GDP), especially South Africa and Mozambique, which means droughts pose a significant threat to their economies and the societies that depend on them. The risks associated with droughts have consequently created an increased demand for an efficient method of analysing and investigating droughts and the impacts they impose on forest vegetation. Therefore, this study aimed to examine the effects of droughts on all commercial forests within the province of KwaZulu-Natal (KZN), at catchment and provincial scales, by employing Kernel Support Vector Machine (Kernel-SVM), Rotation Forest (RTF) and Extreme Gradient Boosting (XGBoost) algorithms. These were based on Landsat- and MODIS-derived vegetation and conditional drought indices. The main aim of this study was achieved through the following objectives: (i) to improve methods for classifying droughts; (ii) to achieve medium-spatial-resolution drought analysis using Landsat sensors; (iii) to determine the accuracy of machine learning algorithms (MLAs) when employed on remote sensing data; and (iv) to improve the usability of conditional drought indices and vegetation indices. The results obtained thereafter demonstrated that the objectives of this study were met. The MLAs performed better when using conditional drought indices than when using vegetation indices, highlighting drawbacks already associated with vegetation indices.
At the catchment scale, Kernel-SVM produced an overall accuracy (OA) of 94.44% when based on conditional drought indices, compared to 81.48% when based on vegetation indices. On the same scale, Rotation Forest (RTF) produced 96.30% and 81.84% when using conditional drought indices and vegetation indices, respectively. At the provincial scale, RTF produced an OA of 76.6% and 70.7% when using conditional drought indices and vegetation indices, respectively. This was compared to Extreme Gradient Boosting (XGBoost), which produced an OA of 81.9% and 69.3% when using conditional drought indices and vegetation indices, respectively. These results indicate that it is possible to analyse droughts at both provincial and catchment scales. Although the results presented in this study are promising, more research is still required to improve the applicability of MLAs in drought analysis.
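The comparison above — overall accuracy of a kernel SVM on two competing predictor sets — can be sketched as follows. The data are synthetic stand-ins (the "conditional drought index" features are simply made more informative than the "vegetation index" features), not values derived from Landsat or MODIS, and scikit-learn's RBF-kernel SVC stands in for the thesis's Kernel-SVM; Rotation Forest has no scikit-learn implementation and is omitted.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic per-sample drought labels and two illustrative feature sets
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=400)                       # drought / no drought
cond_idx = y[:, None] + rng.normal(0.0, 0.4, (400, 3)) # more informative
veg_idx = y[:, None] + rng.normal(0.0, 1.5, (400, 3))  # noisier

results = {}
for name, X in [("conditional", cond_idx), ("vegetation", veg_idx)]:
    # 5-fold cross-validated overall accuracy of an RBF-kernel SVM
    results[name] = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
    print(f"{name} indices: kernel-SVM OA = {results[name]:.2f}")
```

Cross-validated accuracy is a reasonable proxy for the per-scale OA figures quoted above when a held-out validation set is not available.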