38 research outputs found
Identification of Brush Species and Herbicide Effect Assessment in Southern Texas Using an Unoccupied Aerial System (UAS)
Cultivation and grazing since the mid-nineteenth century in Texas have caused dramatic changes in grassland vegetation. Among these changes is the encroachment of native and introduced brush species. The distribution and quantity of brush can affect livestock production and the water-holding capacity of soil; at the same time, brush can improve carbon sequestration and enhance agritourism and real estate value. Accurate identification of brush species and their distribution over large land tracts is important in developing brush management plans, which may include herbicide application decisions. Near-real-time imaging and analysis of brush using an Unoccupied Aerial System (UAS) is a powerful tool for such tasks. The use of multispectral imagery collected by a UAS to estimate the efficacy of herbicide treatment on noxious brush has not been evaluated previously, and no previous work has compared band combinations and pixel- and object-based methods to determine the best methodology for discriminating and classifying noxious brush species with Random Forest (RF) classification. In this study, two rangelands in southern Texas with encroachment of huisache (Vachellia farnesiana [L.] Wight & Arn.) and honey mesquite (Prosopis glandulosa Torr. var. glandulosa) were studied. The two sites were flown with an eBee X fixed-wing UAS to collect four-band (Green, Red, Red-Edge, and Near-infrared) images, and ground truth data points were collected pre- and post-herbicide application to study the herbicide effect on brush; post-herbicide data were collected one year after herbicide application. Pixel-based and object-based RF classifications were used to identify brush in orthomosaic images generated from the UAS images. Overall classification accuracy ranged from 83% to 96%, and object-based classification outperformed pixel-based classification, achieving the highest overall accuracy (96%) at both sites.
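A minimal sketch of the pixel-based RF classification idea described above, assuming scikit-learn and using synthetic four-band reflectance values as stand-ins for real UAS pixels (the class spectra and all values are hypothetical, not from the study):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic 4-band (Green, Red, Red-Edge, NIR) pixel samples for two
# hypothetical classes: "brush" (higher NIR/Red-Edge) and "grass".
brush = rng.normal([0.10, 0.08, 0.30, 0.55], 0.03, size=(200, 4))
grass = rng.normal([0.12, 0.10, 0.20, 0.35], 0.03, size=(200, 4))
X = np.vstack([brush, grass])
y = np.array([0] * 200 + [1] * 200)  # 0 = brush, 1 = grass

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Classify a new pixel with brush-like spectral values.
pred = clf.predict([[0.10, 0.08, 0.31, 0.56]])[0]
print("brush" if pred == 0 else "grass")
```

In practice each orthomosaic pixel (pixel-based) or segmented image object (object-based) would supply the feature vector instead of synthetic samples.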
The UAS imagery was useful for assessing herbicide efficacy by calculating canopy change after herbicide treatment. Differences among herbicides and application rates in brush defoliation were measured by comparing canopy change across herbicide treatment zones. UAS-derived multispectral imagery can thus be used to identify brush species in rangelands and to aid in objectively assessing the herbicide effect on brush encroachment.
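The canopy-change calculation can be sketched as a comparison of classified canopy areas before and after treatment; the tiny masks below are hypothetical, standing in for per-zone canopy rasters:

```python
import numpy as np

# Hypothetical binary canopy masks (1 = canopy pixel) classified from
# pre- and post-treatment orthomosaics of the same treatment zone.
pre = np.array([[1, 1, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])
post = np.array([[1, 0, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0]])

# Percent canopy reduction = (pre area - post area) / pre area * 100.
pre_area, post_area = pre.sum(), post.sum()
reduction = (pre_area - post_area) / pre_area * 100
print(f"canopy reduction: {reduction:.1f}%")  # 66.7%
```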
FloodSim: Flood simulation and visualization framework using position-based fluids
Flood modeling and analysis have been a vital research area for reducing the damage caused by flooding and making urban environments resilient against such events. This work focuses on building a framework to simulate and visualize flooding in 3D using position-based fluids for real-time flood spread visualization and analysis. The framework incorporates geographical information, takes several parameters in the form of friction coefficients and storm drain information, and then uses mechanics such as precipitation and soil absorption for simulation. The preliminary results of the river flooding test case were satisfactory, as the flood extent was reproduced in 220 s with a difference of 7%. Consequently, the framework could be a useful tool for practitioners who have information about the study area and would like to visualize flooding using a particle-based approach for real-time particle tracking and flood path analysis, incorporating precipitation into their models.
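Position-based fluids build on the predict-project loop of position-based dynamics, where particle positions are predicted, constraints are projected, and velocities are derived from the corrections. A minimal sketch of that loop, simplified to a single ground-plane constraint (real position-based fluids add density constraints between neighboring particles; all values here are hypothetical):

```python
import numpy as np

def pbd_step(pos, vel, dt=0.016, gravity=np.array([0.0, -9.81])):
    """One simplified position-based dynamics step: predict positions,
    project a ground-plane constraint (y >= 0), then derive velocities
    from the positional corrections."""
    pred = pos + dt * vel + dt * dt * gravity  # predict under gravity
    pred[:, 1] = np.maximum(pred[:, 1], 0.0)   # project: nothing below ground
    new_vel = (pred - pos) / dt                # velocity from corrections
    return pred, new_vel

# Two particles: one settling onto the ground, one still falling.
pos = np.array([[0.0, 0.001], [0.0, 1.0]])
vel = np.zeros((2, 2))
for _ in range(10):
    pos, vel = pbd_step(pos, vel)
print(pos[:, 1])  # first particle rests at y = 0
```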
Review and Evaluation of Deep Learning Architectures for Efficient Land Cover Mapping with UAS Hyper-Spatial Imagery: A Case Study Over a Wetland
Deep learning has already proven to be a powerful state-of-the-art technique for many image understanding tasks in computer vision and other applications, including remote sensing (RS) image analysis. Unmanned aircraft systems (UASs) offer a viable and economical alternative to conventional sensors and platforms for acquiring high spatial and high temporal resolution data with high operational flexibility. Coastal wetlands are among the most challenging and complex ecosystems for land cover prediction and mapping tasks because land cover targets often show high intra-class and low inter-class variance. In recent years, several deep convolutional neural network (CNN) architectures have been proposed for pixel-wise image labeling, commonly called semantic image segmentation. In this paper, some of the more recent deep CNN architectures proposed for semantic image segmentation are reviewed, and each model’s training efficiency and classification performance are evaluated by training it on a limited labeled image set. Training samples are provided using hyper-spatial resolution UAS imagery over a wetland area, and the required ground truth images are prepared by manual image labeling. Experimental results demonstrate that deep CNNs have great potential for accurate land cover prediction tasks using UAS hyper-spatial resolution images. Some simple deep learning architectures perform comparably to or even better than complex and very deep architectures with remarkably fewer training epochs. This performance is especially valuable when limited training samples are available, which is a common case in most RS applications.
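Classification performance for pixel-wise labeling is typically summarized with overall accuracy and a confusion matrix over the ground-truth and predicted label maps. A minimal sketch with a hypothetical 4×4 label map (classes and values invented for illustration):

```python
import numpy as np

# Hypothetical ground-truth and predicted land cover label maps
# (0 = water, 1 = marsh vegetation, 2 = bare soil).
truth = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1],
                  [2, 2, 1, 1],
                  [2, 2, 2, 2]])
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [2, 1, 1, 1],
                 [2, 2, 2, 2]])

# Overall accuracy: fraction of pixels labeled correctly.
overall_acc = (pred == truth).mean()

# Confusion matrix: rows = ground truth class, columns = predicted class.
n_classes = 3
confusion = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(confusion, (truth.ravel(), pred.ravel()), 1)
print(overall_acc)            # 0.9375 (one mislabeled pixel)
print(confusion.diagonal())   # correctly labeled pixels per class
```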
Deep learning pipeline to generate partial 3D structures of unconstrained image sequence
Structure from Motion (SfM) is a technique to recover a 3D scene or object from a set of images. The images are collected from different angles of the object or scene, and the SfM software finds matching 2D points between the images and triangulates the 3D positions of the matched points. SfM is used in many applications, such as virtual and augmented reality to enable virtual tours, as well as scientific applications to scan and study various specimens. Close-range photogrammetry is a low-cost, simple method to attain high-quality 3D object reconstruction. However, software systems need a static scene or a controlled setting (usually a turntable setup with a blank backdrop), which can be a constraining component for scanning an object or a scene. Our research introduces a deep learning-based preprocessing pipeline to mitigate the turntable constraints. The pipeline uses detection and tracking techniques to isolate the different objects from the scene before feeding the imagery to an SfM software system. We assess multiple SfM software systems with and without the pipeline. The results show the pipeline improves 3D reconstruction quality and even recovers the 3D structure of objects that cannot be reconstructed otherwise.
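The triangulation step mentioned above is commonly implemented with the linear (DLT) method: each matched 2D point and camera projection matrix contributes two linear constraints on the homogeneous 3D point, solved via SVD. A self-contained sketch with two hypothetical cameras (all matrices and points invented for illustration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 camera
    projection matrices P1, P2 and matched 2D image points x1, x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both cameras, then recover it.
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers ~[0.5, 0.2, 4.0]
```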
Use of Fire-Extinguishing Balls for a Conceptual System of Drone-Assisted Wildfire Fighting
This paper examines the potential use of fire-extinguishing balls as part of a proposed system in which drone and remote-sensing technologies are utilized cooperatively as a supplement to traditional firefighting methods. The proposed system consists of (1) a scouting unmanned aircraft system (UAS) to detect spot fires and monitor the risk of wildfire approaching a building, fence, and/or firefighting crew via remote sensing, (2) a communication UAS to establish and extend the communication channel between the scouting UAS and the fire-fighting UAS, and (3) a fire-fighting UAS autonomously traveling to waypoints to drop fire-extinguishing balls (environmentally friendly, heat-activated suppressants). This concept is under development through a transdisciplinary multi-institutional project. The scope of this paper encompasses a general illustration of this design and the experiments conducted so far to evaluate fire-extinguishing balls. The results of the experiments show that the smaller fire-extinguishing balls available in the global marketplace, attached to drones, might not be effective in aiding in building fires (unless there are already open windows in the buildings). On the contrary, results show that even the smaller fire-extinguishing balls might be effective in extinguishing short-grass fires (a ball of around 0.5 kg extinguished a 1-meter circle of short grass). This finding guided the authors towards wildfire fighting rather than building fires. The paper also demonstrates the building of heavy-payload drones (around 15 kg payload) and the progress in developing an apparatus, attachable to drones, for carrying fire-extinguishing balls.
Evaluating the Performance of sUAS Photogrammetry with PPK Positioning for Infrastructure Mapping
Traditional acquisition methods for generating digital surface models (DSMs) of infrastructure are either low resolution and slow (total station-based methods) or expensive (LiDAR). By contrast, photogrammetric methods have recently received attention due to their ability to generate dense 3D models quickly and at low cost. However, existing frameworks often utilize many manually measured control points, require a permanent RTK/PPK reference station, or yield a reconstruction accuracy too poor to be useful in many applications. In addition, the causes of inaccuracy in photogrammetric imagery are complex and sometimes not well understood. In this study, a small unmanned aerial system (sUAS) was used to rapidly image a relatively even, 1 ha ground surface. Model accuracy was investigated to determine the importance of ground control point (GCP) count and differential GNSS base station type. Results generally showed the best performance for tests using five or more GCPs or when a Continuously Operating Reference Station (CORS) was used, with vertical root mean square errors of 0.026 and 0.027 m in these cases. Overall, accuracies were comparable to published results in the literature, demonstrating the viability of analyses relying solely on a temporary local base with a one-hour dwell time and no GCPs.
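The vertical accuracy figures above are root mean square errors over independently surveyed check points. A minimal sketch of that computation, with hypothetical model and check-point elevations:

```python
import numpy as np

# Hypothetical model vs. surveyed check-point elevations (meters).
model_z = np.array([10.02, 11.48, 9.97, 10.51, 12.03])
check_z = np.array([10.00, 11.50, 10.00, 10.50, 12.00])

# Vertical root mean square error over the check points.
rmse = np.sqrt(np.mean((model_z - check_z) ** 2))
print(f"vertical RMSE: {rmse:.3f} m")
```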
Characterizing canopy height with UAS structure-from-motion photogrammetry—results analysis of a maize field trial with respect to multiple factors
Unmanned aircraft system (UAS)-measured canopy height has frequently been determined by means of digital surface models (DSMs) derived from structure-from-motion (SfM) photogrammetry without examining specific metrics in detail. Multiple geospatial factors to be considered for generating an accurate height estimate were characterized and summarized in this letter using UAS-SfM photogrammetry over an experimental maize field trial. This study demonstrated that: 1) the 99th percentile height in a 25 cm-wide crop row polygon provided the best canopy height estimation accuracy; 2) the height difference between using a rasterized DSM and the direct three-dimensional (3D) point cloud was minor yet steadily increased as the DSM resolution grew coarser; and 3) the accuracy of DSM-based canopy height estimation dropped significantly once the DSM resolution became coarser than 12 cm. Results also suggested that the cost function introduced in this letter has the potential to be used for optimizing the height estimation accuracy of various crop types given ground truth.
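The 99th-percentile metric can be sketched directly with `np.percentile` over the heights falling inside a row polygon; the point heights below are synthetic stand-ins (a mix of soil returns and canopy returns, values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SfM point heights (m above ground) inside one 25 cm-wide
# crop row polygon: mostly canopy returns plus some soil returns.
soil = rng.uniform(0.0, 0.05, 50)
canopy = rng.normal(1.8, 0.1, 500)
heights = np.concatenate([soil, canopy])

# Per-row canopy height estimate: the 99th percentile of the heights,
# which ignores soil points and is robust to isolated noisy maxima.
h99 = np.percentile(heights, 99)
print(f"estimated canopy height: {h99:.2f} m")
```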
Deep Learning-Based Single Image Super-Resolution: An Investigation for Dense Scene Reconstruction with UAS Photogrammetry
The deep convolutional neural network (DCNN) has recently been applied to the highly challenging and ill-posed problem of single image super-resolution (SISR), which aims to predict high-resolution (HR) images from their corresponding low-resolution (LR) images. In many remote sensing (RS) applications, the spatial resolution of aerial or satellite imagery has a great impact on the accuracy and reliability of information extracted from the images. In this study, the potential of a DCNN-based SISR model, called the enhanced super-resolution generative adversarial network (ESRGAN), to predict the spatial information degraded or lost in a hyper-spatial resolution unmanned aircraft system (UAS) RGB image set is investigated. The ESRGAN model is trained on a limited number of original HR images (50 out of 450 total) and virtually generated LR UAS images produced by downsampling the original HR images with a bicubic kernel at a factor of ×4. Quantitative and qualitative assessments of the super-resolved images using standard image quality measures (IQMs) confirm that the DCNN-based SISR approach can be successfully applied to LR UAS imagery for spatial resolution enhancement. The performance of the DCNN-based SISR approach on the UAS image set closely approximates performances reported on standard SISR image sets, with mean peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index values of around 28 dB and 0.85, respectively. Furthermore, by exploiting the rigorous Structure-from-Motion (SfM) photogrammetry procedure, an accurate task-based IQM for evaluating the quality of the super-resolved images is carried out. Results verify that the interior and exterior imaging geometry, which is extremely important for extracting highly accurate spatial information from UAS imagery in photogrammetric applications, can be accurately retrieved from a super-resolved image set.
The numbers of corresponding keypoints and dense points generated from the SfM photogrammetry process are about 6 and 17 times greater, respectively, than those extracted from the corresponding LR image set.
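PSNR, one of the IQMs reported above, is defined from the mean squared error between a reference image and a test image. A minimal sketch with synthetic images (the noisy image is only a stand-in for a super-resolved output; all values hypothetical):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio between two images, in dB."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(2)
hr = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = hr + rng.normal(0, 10, size=hr.shape)  # stand-in for a super-resolved image
print(f"PSNR: {psnr(hr, noisy):.1f} dB")  # ~28 dB for noise sigma = 10
```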
Simulation and characterization of wind impacts on sUAS flight performance for crash scene reconstruction
Small unmanned aircraft systems (sUASs) have emerged as promising platforms for crash scene reconstruction through structure-from-motion (SfM) photogrammetry. However, auto crashes tend to occur under adverse weather conditions that usually pose increased risks for sUAS operation. Wind is a typical environmental factor behind adverse weather, and sUAS responses to various wind conditions have been understudied. To bridge this gap, commercial and open-source sUAS flight simulation software is employed in this study to analyze the impacts of wind speed, direction, and turbulence on the ability of an sUAS to track a pre-planned path and on the endurance of the flight mission. The simulation uses typical flight capabilities of quadcopter sUAS platforms that have been increasingly used for traffic incident management. Incremental increases in wind speed, direction, and turbulence are simulated. Average 3D error, standard deviation, battery use, and flight time are used as statistical metrics to characterize the wind impacts on flight stability and endurance. Both statistical and visual analytics are performed. Simulation results suggest operating the simulated quadcopter type when wind speed is below 11 m/s under light to moderate turbulence for optimal flight performance in crash scene reconstruction missions, measured in terms of positional accuracy, required flight time, and battery use. Major lessons learned for real-world quadcopter sUAS flight design in windy conditions for crash scene mapping are also documented.
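The average 3D error metric above can be sketched as the mean Euclidean distance between planned and flown positions along the mission path (the waypoints below are hypothetical, not from the study):

```python
import numpy as np

# Hypothetical planned waypoints vs. flown positions (meters; x, y, z).
planned = np.array([[0.0, 0.0, 30.0], [50.0, 0.0, 30.0], [100.0, 0.0, 30.0]])
flown = np.array([[0.4, -0.3, 29.8], [50.6, 0.5, 30.4], [99.5, -0.2, 29.7]])

# Average 3D error: mean Euclidean distance between corresponding
# planned and actual positions; its standard deviation measures spread.
errors = np.linalg.norm(flown - planned, axis=1)
avg_3d_error = errors.mean()
print(f"average 3D error: {avg_3d_error:.2f} m, std: {errors.std():.2f} m")
```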
Unsupervised Clustering Method for Complexity Reduction of Terrestrial Lidar Data in Marshes
Accurate characterization of marsh elevation and landcover evolution is important for coastal management and conservation. This research proposes a novel unsupervised clustering method specifically developed for segmenting dense terrestrial laser scanning (TLS) data in coastal marsh environments. The framework implements unsupervised clustering with the well-known K-means algorithm, applying an optimization to determine the number of clusters k. The fundamental idea behind this framework is the application of a multi-scale voxel representation of 3D space to create a set of features that characterizes the local complexity and geometry of the terrain. A combination of point- and voxel-generated features is utilized to segment 3D point clouds into homogeneous groups in order to study surface changes and vegetation cover. Results suggest that the combination of point and voxel features represents the dataset well. The developed method compresses millions of 3D points representing the complex marsh environment into eight distinct clusters representing different landcover: tidal flat, mangrove, low marsh to high marsh, upland, and power lines. A quantitative assessment of the automated delineation of the tidal flat areas shows acceptable results considering that the proposed method is unsupervised, with no training data. Clustering results based on K-means are also compared to results based on the Self-Organizing Map (SOM) clustering algorithm. Results demonstrate that the developed multi-scale voxelization approach and representative feature set are transferable to other clustering algorithms, thereby providing an unsupervised framework for intelligent scene segmentation of TLS point cloud data in marshes.
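The K-means core of the framework can be sketched in a few lines; this toy version uses two invented per-point features (height and a roughness-like value standing in for the paper's multi-scale voxel features), a fixed k, and deterministic initialization rather than the paper's k-optimization:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal K-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its assigned points. Centroids are
    initialized deterministically from evenly spaced input points."""
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(3)
# Hypothetical per-point features for two landcover types.
flat = rng.normal([0.1, 0.05], 0.02, size=(100, 2))   # tidal flat: low, smooth
shrub = rng.normal([1.2, 0.40], 0.05, size=(100, 2))  # mangrove: tall, rough
X = np.vstack([flat, shrub])
labels, _ = kmeans(X, k=2)
print(np.bincount(labels))  # two clusters of 100 points each
```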