
    Identification of Brush Species and Herbicide Effect Assessment in Southern Texas Using an Unoccupied Aerial System (UAS)

    Cultivation and grazing since the mid-nineteenth century in Texas have caused dramatic changes in grassland vegetation, among them the encroachment of native and introduced brush species. The distribution and quantity of brush can reduce livestock production and the water-holding capacity of soil; at the same time, brush can improve carbon sequestration and enhance agritourism and real estate value. Accurate identification of brush species and their distribution over large land tracts is important in developing brush management plans, which may include herbicide application decisions. Near-real-time imaging and analysis of brush using an Unoccupied Aerial System (UAS) is a powerful tool for such tasks. The use of multispectral imagery collected by a UAS to estimate the efficacy of herbicide treatment on noxious brush has not been evaluated previously, and there has been no comparison of band combinations and pixel- and object-based methods to determine the best methodology for discriminating and classifying noxious brush species with Random Forest (RF) classification. In this study, two rangelands in southern Texas with encroachment of huisache (Vachellia farnesiana [L.] Wight & Arn.) and honey mesquite (Prosopis glandulosa Torr. var. glandulosa) were studied. Both sites were flown with an eBee X fixed-wing UAS to collect four-band (Green, Red, Red-Edge, and Near-infrared) images, and ground-truth data points were gathered pre- and post-herbicide application to study the herbicide effect on brush; post-herbicide data were collected one year after application. Pixel-based and object-based RF classifications were used to identify brush in orthomosaics generated from the UAS images. Overall classification accuracy ranged from 83% to 96%, with object-based classification outperforming pixel-based classification and achieving the highest overall accuracy, 96%, at both sites.
The UAS imagery was also useful for assessing herbicide efficacy by quantifying canopy change after treatment. The effects of different herbicides and application rates on brush defoliation were measured by comparing canopy change across herbicide treatment zones. UAS-derived multispectral imagery can thus be used to identify brush species in rangelands and to aid in objectively assessing the herbicide effect on brush encroachment.
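The efficacy assessment described above, comparing canopy extent in a treatment zone before and after herbicide application, can be sketched as below. The mask arrays, pixel size, and zone are hypothetical illustrations, not data from the study.

```python
import numpy as np

def canopy_change(pre_mask: np.ndarray, post_mask: np.ndarray) -> float:
    """Percent change in brush canopy area between two boolean
    classification masks of the same treatment zone (True = brush)."""
    pre_area = pre_mask.sum()
    if pre_area == 0:
        return 0.0
    return 100.0 * (post_mask.sum() - pre_area) / pre_area

# Toy classification masks for one treatment zone (hypothetical data).
pre = np.zeros((10, 10), dtype=bool)
pre[2:8, 2:8] = True           # 36 px of brush before treatment
post = np.zeros((10, 10), dtype=bool)
post[4:7, 4:7] = True          # 9 px remain one year after treatment

print(canopy_change(pre, post))  # -75.0 (75% canopy reduction)
```

Running the same calculation per treatment zone allows herbicides and application rates to be compared on equal footing.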

    Use of Fire-Extinguishing Balls for a Conceptual System of Drone-Assisted Wildfire Fighting

    This paper examines the potential use of fire-extinguishing balls as part of a proposed system in which drone and remote-sensing technologies are used cooperatively to supplement traditional firefighting methods. The proposed system consists of (1) a scouting unmanned aircraft system (UAS) that detects spot fires and monitors the risk of wildfire approaching a building, fence, and/or firefighting crew via remote sensing, (2) a communication UAS that establishes and extends the communication channel between the scouting UAS and the firefighting UAS, and (3) a firefighting UAS that autonomously travels to waypoints to drop fire-extinguishing balls (environmentally friendly, heat-activated suppressants). The concept is under development through a transdisciplinary, multi-institutional project. The scope of this paper covers a general illustration of the design and the experiments conducted so far to evaluate fire-extinguishing balls. The experiments show that the smaller fire-extinguishing balls available in the global marketplace, when attached to drones, might not be effective against building fires (unless the buildings already have open windows). In contrast, even the smaller balls might be effective in extinguishing short grass fires: a ball of around 0.5 kg extinguished a circle of short grass roughly 1 m across. This finding steered the authors toward wildfire fighting rather than building fires. The paper also describes the construction of heavy-payload drones (around 15 kg payload) and progress on an apparatus, attachable to drones, for carrying fire-extinguishing balls.
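Combining the two figures the abstract reports (a ~0.5 kg ball clearing a ~1 m circle of short grass, and a ~15 kg payload capacity) gives a rough back-of-envelope capacity estimate per flight. This is an illustrative calculation only, not an analysis from the paper.

```python
import math

BALL_MASS_KG = 0.5         # smaller commercial fire-extinguishing ball
CLEARED_DIAMETER_M = 1.0   # short-grass circle extinguished per ball (field test)
PAYLOAD_KG = 15.0          # heavy-payload drone capacity reported

balls_per_sortie = int(PAYLOAD_KG // BALL_MASS_KG)
area_per_ball = math.pi * (CLEARED_DIAMETER_M / 2) ** 2

print(balls_per_sortie)                             # 30 balls per flight
print(round(balls_per_sortie * area_per_ball, 1))   # ~23.6 m^2 suppressed per sortie
```

Real coverage would depend on drop accuracy, overlap, fuel moisture, and wind, so this is an upper bound under ideal spacing.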

    Evaluating the Performance of sUAS Photogrammetry with PPK Positioning for Infrastructure Mapping

    Traditional acquisition methods for generating digital surface models (DSMs) of infrastructure are either low-resolution and slow (total station-based methods) or expensive (LiDAR). By contrast, photogrammetric methods have recently received attention for their ability to generate dense 3D models quickly and at low cost. However, existing frameworks often require many manually measured control points or a permanent RTK/PPK reference station, or yield reconstruction accuracy too poor for many applications. In addition, the causes of inaccuracy in photogrammetric imagery are complex and sometimes not well understood. In this study, a small unmanned aerial system (sUAS) was used to rapidly image a relatively even, 1 ha ground surface, and model accuracy was investigated to determine the importance of ground control point (GCP) count and differential GNSS base station type. Results generally showed the best performance for tests using five or more GCPs or a Continuously Operating Reference Station (CORS), with vertical root mean square errors of 0.026 m and 0.027 m in these cases. Accuracy outputs were generally comparable to published results in the literature, demonstrating the viability of analyses relying solely on a temporary local base with a one-hour dwell time and no GCPs.
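The vertical accuracy metric used here, root mean square error of DSM elevations against surveyed check points, is straightforward to compute. The elevations below are hypothetical check-point values, chosen so the result lands near the 0.026-0.027 m range the study reports.

```python
import math

def vertical_rmse(measured_z, reference_z):
    """Vertical root mean square error (m) between DSM-derived elevations
    and GNSS-surveyed check-point elevations."""
    residuals = [m - r for m, r in zip(measured_z, reference_z)]
    return math.sqrt(sum(e * e for e in residuals) / len(residuals))

# Hypothetical check points: DSM-derived vs surveyed elevations (m).
dsm_z = [12.031, 11.998, 12.540, 13.012]
gnss_z = [12.000, 12.020, 12.515, 13.040]

print(round(vertical_rmse(dsm_z, gnss_z), 3))  # 0.027
```

In practice the check points must be independent of any GCPs used in the bundle adjustment, or the RMSE will be optimistic.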

    Deep Learning-Based Single Image Super-Resolution: An Investigation for Dense Scene Reconstruction with UAS Photogrammetry

    The deep convolutional neural network (DCNN) has recently been applied to the highly challenging and ill-posed problem of single image super-resolution (SISR), which aims to predict high-resolution (HR) images from their corresponding low-resolution (LR) images. In many remote sensing (RS) applications, the spatial resolution of aerial or satellite imagery has a great impact on the accuracy and reliability of information extracted from the images. In this study, the potential of a DCNN-based SISR model, the enhanced super-resolution generative adversarial network (ESRGAN), to predict spatial information degraded or lost in a hyper-spatial-resolution unmanned aircraft system (UAS) RGB image set is investigated. The ESRGAN model is trained on a limited number of original HR images (50 of 450 total) and virtually generated LR UAS images produced by downsampling the original HR images with a bicubic kernel at a factor of 4. Quantitative and qualitative assessments of the super-resolved images using standard image quality measures (IQMs) confirm that the DCNN-based SISR approach can be successfully applied to LR UAS imagery for spatial resolution enhancement. The performance of the approach on the UAS image set closely approximates performance reported on standard SISR image sets, with a mean peak signal-to-noise ratio (PSNR) of around 28 dB and a structural similarity (SSIM) index of around 0.85. Furthermore, by exploiting the rigorous Structure-from-Motion (SfM) photogrammetry procedure, an accurate task-based IQM for evaluating the quality of the super-resolved images is carried out. Results verify that the interior and exterior imaging geometry, which is extremely important for extracting highly accurate spatial information from UAS imagery in photogrammetric applications, can be accurately retrieved from the super-resolved image set.
The numbers of corresponding keypoints and dense points generated by the SfM photogrammetry process are about 6 and 17 times greater, respectively, than those extracted from the corresponding LR image set.
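The evaluation protocol above, synthesizing LR images from HR originals at a factor of 4 and then scoring reconstructions with PSNR, can be sketched as follows. For self-containment this sketch uses 4x4 block averaging as a simple stand-in for the paper's bicubic kernel, naive pixel repetition as a baseline upsampler, and synthetic data in place of the UAS images.

```python
import numpy as np

def downsample_x4(img: np.ndarray) -> np.ndarray:
    """Factor-4 downsampling by 4x4 block averaging (a simplified
    stand-in for the bicubic kernel used in the study)."""
    h, w = img.shape
    return img[: h - h % 4, : w - w % 4].reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference - estimate) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
hr = rng.uniform(0, 255, (64, 64))                 # synthetic HR image band
lr = downsample_x4(hr)                             # virtually generated LR image
upsampled = np.repeat(np.repeat(lr, 4, 0), 4, 1)   # naive x4 upsampling baseline

print(lr.shape)                  # (16, 16)
print(psnr(hr, upsampled) > 0)   # True; an SR model should beat this baseline
```

A trained SISR model would replace the pixel-repetition step, and its PSNR against the held-out HR originals gives the quality measure reported in the abstract.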

    Unsupervised Clustering Method for Complexity Reduction of Terrestrial Lidar Data in Marshes

    Accurate characterization of marsh elevation and landcover evolution is important for coastal management and conservation. This research proposes a novel unsupervised clustering method developed specifically for segmenting dense terrestrial laser scanning (TLS) data in coastal marsh environments. The framework implements unsupervised clustering with the well-known K-means algorithm, applying an optimization to determine the number of clusters k. The fundamental idea behind the framework is a multi-scale voxel representation of 3D space that yields a set of features characterizing the local complexity and geometry of the terrain. A combination of point- and voxel-generated features is used to segment the 3D point cloud into homogeneous groups in order to study surface changes and vegetation cover. Results suggest that the combination of point and voxel features represents the dataset well. The method compresses millions of 3D points representing the complex marsh environment into eight distinct clusters representing different landcover: tidal flat, mangrove, low marsh to high marsh, upland, and power lines. A quantitative assessment of the automated delineation of tidal flat areas shows acceptable results, considering that the proposed method is unsupervised with no training data. Clustering results based on K-means are also compared to results from the Self-Organizing Map (SOM) clustering algorithm. The results demonstrate that the multi-scale voxelization approach and representative feature set are transferable to other clustering algorithms, providing an unsupervised framework for intelligent scene segmentation of TLS point cloud data in marshes.
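The core idea above, deriving per-point features from a multi-scale voxelization and clustering them with K-means, can be sketched as below. The voxel sizes, the point-count density feature, the minimal Lloyd's-algorithm K-means, and the synthetic two-class point cloud are all illustrative assumptions, not the paper's actual feature set or optimized-k procedure.

```python
import numpy as np

def voxel_density_features(points: np.ndarray, sizes=(0.5, 2.0)) -> np.ndarray:
    """Per-point local density at several voxel scales: for each voxel
    size, count how many points share each point's voxel."""
    feats = []
    for s in sizes:
        keys = np.floor(points / s).astype(np.int64)
        _, inv, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
        feats.append(counts[inv])
    return np.stack(feats, axis=1).astype(float)

def kmeans(x: np.ndarray, k: int, iters: int = 20, seed: int = 0) -> np.ndarray:
    """Minimal Lloyd's algorithm (stand-in for the optimized-k pipeline)."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

# Synthetic TLS scene: a flat tidal surface plus an elevated canopy layer.
rng = np.random.default_rng(1)
flat = np.c_[rng.uniform(0, 10, (200, 2)), rng.normal(0.0, 0.02, 200)]
canopy = np.c_[rng.uniform(0, 10, (200, 2)), rng.uniform(0.5, 2.0, 200)]
cloud = np.vstack([flat, canopy])

# Combine a point feature (height) with multi-scale voxel features.
feats = np.c_[cloud[:, 2], voxel_density_features(cloud)]
labels = kmeans(feats, k=2)
print(len(set(labels.tolist())))  # 2 clusters found
```

The full method would use richer geometric features, millions of points, and an optimization over k (eight clusters in the study) rather than a fixed k=2.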