
    Unmanned Aerial Vehicles (UAVs) in environmental biology: A Review

    Acquiring information about the environment is a key step in any study in environmental biology, at levels ranging from the individual species to the community and biome. However, obtaining this information is frequently difficult because of, for example, phenological timing, the spatial distribution of a species, or the limited accessibility of a particular area for field survey. Moreover, remote sensing technology, which enables observation of the Earth's surface and is currently very common in environmental research, has many limitations, such as insufficient spatial, spectral and temporal resolution and a high cost of data acquisition. Since the 1990s, researchers have been exploring the potential of different types of unmanned aerial vehicles (UAVs) for monitoring the Earth's surface. The present study reviews recent scientific literature dealing with the use of UAVs in environmental biology. Amongst numerous papers, short communications and conference abstracts, we selected 110 original studies of how UAVs can be used in environmental biology and which organisms can be studied in this manner. Most of these studies concerned the use of UAVs to measure vegetation parameters such as crown height, volume and number of individuals (14 studies), and to quantify the spatio-temporal dynamics of vegetation changes (12 studies). UAVs were also frequently applied to count birds and mammals, especially those living in the water. The analytical part of the present study is divided into the following sections: (1) detecting, assessing and predicting threats to vegetation, (2) measuring the biophysical parameters of vegetation, (3) quantifying the dynamics of changes in plants and habitats and (4) population and behaviour studies of animals. Finally, we synthesise this information to show, amongst other things, the advances in environmental biology made possible by UAV applications.
Considering that 33% of the studies found and included in this review were published in 2017 and 2018, the number and variety of applications of UAVs in environmental biology is expected to increase in the future.

    Evaluating techniques for mapping island vegetation from unmanned aerial vehicle (UAV) images: Pixel classification, visual interpretation and machine learning approaches

    We evaluate three approaches to mapping vegetation using images collected by an unmanned aerial vehicle (UAV) to monitor rehabilitation activities in the Five Islands Nature Reserve, Wollongong (Australia). Between April 2017 and July 2018, four aerial surveys of Big Island were undertaken to map changes to island vegetation following helicopter herbicide sprays to eradicate weeds, including the creeper Coastal Morning Glory (Ipomoea cairica) and Kikuyu Grass (Cenchrus clandestinus). The spraying was followed by a large-scale planting campaign to introduce native plants, such as tussocks of Spiny-headed Mat-rush (Lomandra longifolia). Three approaches to mapping vegetation were evaluated: (i) a pixel-based image classification algorithm applied to the composite spectral wavebands of the images collected, (ii) manual digitisation of vegetation directly from images based on visual interpretation, and (iii) the application of a machine learning algorithm, LeNet, based on a deep learning convolutional neural network (CNN), for detecting planted Lomandra tussocks. The uncertainty of each approach was assessed via comparison against an independently collected field dataset. Each of the vegetation mapping approaches had a comparable accuracy; for a selected weed management and planting area, the overall accuracies were 82%, 91% and 85% respectively for the pixel-based image classification, the visual interpretation/digitisation and the CNN machine learning algorithm. At the scale of the whole island, statistically significant differences in the performance of the three approaches to mapping Lomandra plants were detected via ANOVA. The manual digitisation took longer to perform than the other approaches. The three approaches resulted in markedly different vegetation maps characterised by different digital data formats, which offered fundamentally different types of information on vegetation character.
We draw attention to the need to consider how different digital map products will be used for vegetation management (e.g. monitoring the health of individual species or a broader profile of the community). Where individual plants are to be monitored over time, a feature-based approach that represents plants as vector points is appropriate. The CNN approach emerged as a promising technique in this regard, as it leveraged spatial information from the UAV images within the architecture of the learning framework by enforcing a local connectivity pattern between neurons of adjacent layers to incorporate the spatial relationships between the features that comprised the shape of the detected Lomandra tussocks.
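
    The local connectivity pattern described above is the defining property of a convolution layer: each output neuron sees only a small neighbourhood of the input rather than the whole image. A minimal, hypothetical sketch of this idea (a naive 2D convolution in NumPy, not the authors' LeNet implementation):

    ```python
    import numpy as np

    def conv2d_valid(image, kernel):
        """Naive 2D 'valid' convolution: each output value depends only on
        a small local patch of the input -- the local connectivity that lets
        a CNN exploit the spatial shape of objects such as plant tussocks."""
        kh, kw = kernel.shape
        oh = image.shape[0] - kh + 1
        ow = image.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    # A tiny vertical-edge-like filter applied to a synthetic 6x6 "image".
    img = np.arange(36, dtype=float).reshape(6, 6)
    k = np.array([[1.0, -1.0], [1.0, -1.0]])
    features = conv2d_valid(img, k)
    ```

    Real CNN frameworks implement the same operation with many learned kernels per layer; this loop form only illustrates the restricted receptive field.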

    Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar both spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique for creating a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV images (UAV-images) captured at different altitudes is examined to test the quality of the RS-image output; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra. The performance of object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV-images captured at an altitude of 30 m, and that RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights.
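
    Block-mean aggregation is one common way to simulate a coarser ground sample distance from a higher-resolution image. The sketch below is a hypothetical illustration of the general idea, not necessarily the resampling method used in the paper:

    ```python
    import numpy as np

    def resample_by_block_mean(band, factor):
        """Simulate a coarser ground sample distance by averaging
        non-overlapping factor x factor pixel blocks of a single band.
        (Hypothetical helper; the paper's exact resampling may differ.)"""
        h, w = band.shape
        h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
        blocks = band[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))

    # Imagery from a 30 m flight resampled as if captured at roughly
    # twice the altitude (factor 2 -> half the resolution per axis).
    hi_res = np.full((120, 90), 0.4)          # e.g. a reflectance band
    lo_res = resample_by_block_mean(hi_res, 2)
    ```

    In practice, geospatial tools offer several resampling kernels (nearest, bilinear, average); block averaging corresponds to the "average" option and preserves mean spectral values well.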

    WeedMap: A large-scale semantic weed mapping framework using aerial multispectral imaging and deep neural network for precision farming

    We present a novel weed segmentation and mapping framework that processes multispectral images obtained from an unmanned aerial vehicle (UAV) using a deep neural network (DNN). Most studies on crop/weed semantic segmentation only consider single images for processing and classification. Images taken by UAVs often cover only a few hundred square meters with either color-only or color and near-infrared (NIR) channels. Computing a single large and accurate vegetation map (e.g., crop/weed) using a DNN is non-trivial due to difficulties arising from: (1) limited ground sample distances (GSDs) in high-altitude datasets, (2) sacrificed resolution resulting from downsampling high-fidelity images, and (3) multispectral image alignment. To address these issues, we adopt a standard sliding window approach that operates on only small portions of multispectral orthomosaic maps (tiles), which are channel-wise aligned and calibrated radiometrically across the entire map. We define the tile size to be the same as that of the DNN input to avoid resolution loss. Compared to our baseline model (i.e., SegNet with 3-channel RGB inputs) yielding an area under the curve (AUC) of [background=0.607, crop=0.681, weed=0.576], our proposed model with 9 input channels achieves [0.839, 0.863, 0.782]. Additionally, we provide an extensive analysis of 20 trained models, both qualitatively and quantitatively, in order to evaluate the effects of varying input channels and tunable network hyperparameters. Furthermore, we release a large sugar beet/weed aerial dataset with expertly guided annotations for further research in the fields of remote sensing, precision agriculture, and agricultural robotics. Comment: 25 pages, 14 figures, MDPI Remote Sensing.
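
    The tiling step described above, cutting a channel-aligned orthomosaic into chunks that match the DNN input size so no resolution is lost to downsampling, can be sketched roughly as follows (hypothetical helper names, not the authors' code):

    ```python
    import numpy as np

    def tile_orthomosaic(ortho, tile):
        """Split a channel-aligned orthomosaic (H, W, C) into non-overlapping
        tile x tile chunks sized to match the DNN input, avoiding any
        downsampling. Edge remainders are simply dropped in this sketch."""
        h, w, _c = ortho.shape
        tiles = []
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                tiles.append(ortho[y:y + tile, x:x + tile, :])
        return np.stack(tiles)

    # A toy 9-channel map (e.g. RGB + NIR + red edge + vegetation indices)
    # cut into 64-pixel tiles ready to batch into a segmentation network.
    ortho = np.zeros((256, 192, 9), dtype=np.float32)
    batch = tile_orthomosaic(ortho, 64)
    ```

    A production pipeline would typically also pad or mirror the edges so no area is dropped, and record each tile's origin so predictions can be mosaicked back into one map.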

    Mid to Late Season Weed Detection in Soybean Production Fields Using Unmanned Aerial Vehicle and Machine Learning

    Mid-to-late season weeds are those that escape early season herbicide applications or emerge late in the season. They might not affect the crop yield, but, if uncontrolled, will produce a large number of seeds, causing problems in subsequent years. In this study, high-resolution aerial imagery of mid-season weeds in soybean fields was captured using an unmanned aerial vehicle (UAV), and the performance of two different automated weed detection approaches, patch-based classification and object detection, was studied for site-specific weed management. For the patch-based classification approach, several conventional machine learning models using Haralick texture features were compared with a MobileNet v2-based convolutional neural network (CNN) model for their classification performance. The results showed that the CNN model had the best classification performance for individual patches. Two different image slicing approaches, patches with and without overlap, were tested, and it was found that slicing with overlap leads to improved weed detection but with higher inference time. For the object detection approach, two models with different network architectures, namely Faster RCNN and SSD, were evaluated and compared. It was found that Faster RCNN had better overall weed detection performance than SSD, with similar inference time. Faster RCNN also had better detection performance and shorter inference time than the patch-based CNN with overlapping image slicing. The influence of spatial resolution on weed detection accuracy was investigated by simulating UAV imagery captured at different altitudes, and it was found that Faster RCNN achieves similar performance at a lower spatial resolution. The inference time of Faster RCNN was evaluated using a regular laptop.
The results showed the potential of on-farm, near real-time weed detection in soybean production fields by capturing UAV imagery with less overlap and processing it with a pre-trained deep learning model, such as Faster RCNN, on regular laptops and mobile devices. Advisor: Yeyin Sh
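
    The trade-off between overlapping and non-overlapping slicing can be illustrated with a small, hypothetical helper that enumerates patch origins; a stride smaller than the patch size yields overlap, more patches, and therefore more inference calls:

    ```python
    def slice_patch_origins(width, height, patch, stride):
        """Top-left corners for extracting patch x patch crops from an image.
        stride == patch gives non-overlapping slicing; stride < patch gives
        the overlapping slicing that improved detection at the cost of more
        patches to classify. (Hypothetical illustration, not the study's code.)"""
        xs = range(0, width - patch + 1, stride)
        ys = range(0, height - patch + 1, stride)
        return [(x, y) for y in ys for x in xs]

    # 1024x768 image, 256-pixel patches.
    no_overlap = slice_patch_origins(1024, 768, 256, 256)  # stride == patch
    overlap = slice_patch_origins(1024, 768, 256, 128)     # 50% overlap
    ```

    Here the 50% overlap roughly triples the number of patches, which is consistent with the higher inference time the study reports for overlapping slicing.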
