
    Identification of Brush Species and Herbicide Effect Assessment in Southern Texas Using an Unoccupied Aerial System (UAS)

    Cultivation and grazing since the mid-nineteenth century have caused dramatic changes in the grassland vegetation of Texas. Among these changes is the encroachment of native and introduced brush species. The distribution and quantity of brush can affect livestock production and the water-holding capacity of soil, yet at the same time brush can improve carbon sequestration and enhance agritourism and real estate value. Accurate identification of brush species and their distribution over large land tracts is important in developing brush management plans, which may include herbicide application decisions. Near-real-time imaging and analysis of brush using an Unoccupied Aerial System (UAS) is a powerful tool for such tasks. The use of multispectral imagery collected by a UAS to estimate the efficacy of herbicide treatment on noxious brush has not been evaluated previously, and there has been no previous comparison of band combinations and pixel- and object-based methods to determine the best methodology for discriminating and classifying noxious brush species with Random Forest (RF) classification. In this study, two rangelands in southern Texas with encroachment of huisache (Vachellia farnesiana [L.] Wight & Arn.) and honey mesquite (Prosopis glandulosa Torr. var. glandulosa) were studied. Both sites were flown with an eBee X fixed-wing UAS to collect four-band (Green, Red, Red-Edge, and Near-infrared) images, and ground-truth data points were collected pre- and post-herbicide application to study the herbicide effect on brush. Post-herbicide data were collected one year after herbicide application. Pixel-based and object-based RF classifications were used to identify brush in orthomosaic images generated from the UAS images. Overall classification accuracy ranged from 83% to 96%; object-based classification outperformed pixel-based classification, achieving the highest overall accuracy (96%) at both sites. The UAS imagery was also useful for assessing herbicide efficacy by calculating canopy change after herbicide treatment: the effects of different herbicides and application rates on brush defoliation were measured by comparing canopy change across herbicide treatment zones. UAS-derived multispectral imagery can thus be used to identify brush species in rangelands and to aid in objectively assessing the herbicide effect on brush encroachment.
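
    As a concrete illustration of the pixel-based workflow described above, the sketch below fits a Random Forest to per-pixel values of a four-band (Green, Red, Red-Edge, NIR) image. It is a minimal sketch using synthetic arrays as placeholders; a real pipeline would load the orthomosaic (for example with rasterio) and rasterized ground-truth polygons instead, and the class labels are hypothetical.

    ```python
    # Minimal sketch: pixel-based Random Forest classification of a
    # four-band orthomosaic. All arrays below are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Placeholder for a 4-band orthomosaic: (rows, cols, bands)
    image = rng.random((100, 100, 4)).astype(np.float32)
    # Placeholder ground truth: 0 = grass, 1 = huisache, 2 = mesquite
    labels = rng.integers(0, 3, size=(100, 100))

    X = image.reshape(-1, 4)   # one sample per pixel, one feature per band
    y = labels.reshape(-1)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X_train, y_train)
    print("Overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))
    ```

    An object-based variant would first segment the orthomosaic into image objects and classify per-object band statistics rather than individual pixels, which is what gave the higher accuracy reported above.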

    Use of Fire-Extinguishing Balls for a Conceptual System of Drone-Assisted Wildfire Fighting

    This paper examines the potential use of fire-extinguishing balls as part of a proposed system in which drone and remote-sensing technologies are utilized cooperatively as a supplement to traditional firefighting methods. The proposed system consists of (1) a scouting unmanned aircraft system (UAS) that detects spot fires and monitors, via remote sensing, the risk of a wildfire approaching a building, fence, and/or firefighting crew; (2) a communication UAS that establishes and extends the communication channel between the scouting UAS and the fire-fighting UAS; and (3) a fire-fighting UAS that autonomously travels to waypoints and drops fire-extinguishing balls (environmentally friendly, heat-activated suppressants). This concept is under development through a transdisciplinary, multi-institutional project. The scope of this paper covers a general illustration of the design and the experiments conducted so far to evaluate fire-extinguishing balls. The experimental results show that the smaller fire-extinguishing balls available in the global marketplace, attached to drones, might not be effective against building fires (unless the building already has open windows). In contrast, the results show that even the smaller balls might be effective in extinguishing short grass fires (a ball of around 0.5 kg extinguished a circle of short grass about 1 m across). This finding guided the authors toward wildfire fighting rather than building fires. The paper also describes the construction of heavy-payload drones (around 15 kg payload) and progress on the development of a drone-attachable apparatus for carrying fire-extinguishing balls.
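
    A back-of-the-envelope calculation using only the figures reported above (a 15 kg payload, 0.5 kg balls, and a roughly 1 m extinguished circle) gives a rough upper bound on the grass area a single sortie could treat; the mass of the carrying apparatus and the spacing between drops are ignored, so this is an estimate, not a design value.

    ```python
    import math

    # Figures reported in the abstract above
    payload_kg = 15.0        # heavy-lift drone payload
    ball_kg = 0.5            # mass of a small fire-extinguishing ball
    circle_diameter_m = 1.0  # extinguished circle of short grass

    balls_per_sortie = int(payload_kg // ball_kg)             # 30 balls
    area_per_ball_m2 = math.pi * (circle_diameter_m / 2) ** 2

    print(f"Balls per sortie: {balls_per_sortie}")
    print(f"Treated area per sortie: "
          f"{balls_per_sortie * area_per_ball_m2:.1f} m^2")   # ~23.6 m^2
    ```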

    Evaluating the Performance of sUAS Photogrammetry with PPK Positioning for Infrastructure Mapping

    Traditional acquisition methods for generating digital surface models (DSMs) of infrastructure are either low resolution and slow (total station-based methods) or expensive (LiDAR). By contrast, photogrammetric methods have recently received attention due to their ability to generate dense 3D models quickly and at low cost. However, existing frameworks often rely on many manually measured control points, require a permanent RTK/PPK reference station, or yield a reconstruction accuracy too poor to be useful in many applications. In addition, the causes of inaccuracy in photogrammetric imagery are complex and sometimes not well understood. In this study, a small unmanned aerial system (sUAS) was used to rapidly image a relatively even, 1 ha ground surface. Model accuracy was investigated to determine the importance of ground control point (GCP) count and differential GNSS base station type. Results generally showed the best performance for tests using five or more GCPs or a Continuously Operating Reference Station (CORS), with vertical root mean square errors of 0.026 m and 0.027 m in these cases. However, accuracy generally matched comparable published results, demonstrating the viability of analyses relying solely on a temporary local base with a one-hour dwell time and no GCPs.
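
    The vertical accuracy figures quoted above reduce to a root-mean-square error over surveyed check points. The minimal sketch below shows that computation with synthetic stand-ins; a real workflow would sample the photogrammetric DSM at each check point's surveyed coordinates.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-ins: GNSS-surveyed check-point heights and the
    # DSM heights sampled at the same locations (metres)
    gnss_z = rng.uniform(100.0, 101.0, size=20)
    dsm_z = gnss_z + rng.normal(0.0, 0.027, size=20)

    residuals = dsm_z - gnss_z
    rmse_z = np.sqrt(np.mean(residuals ** 2))
    print(f"Vertical RMSE: {rmse_z:.3f} m")
    ```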

    Deep Learning-Based Single Image Super-Resolution: An Investigation for Dense Scene Reconstruction with UAS Photogrammetry

    The deep convolutional neural network (DCNN) has recently been applied to the highly challenging and ill-posed problem of single image super-resolution (SISR), which aims to predict high-resolution (HR) images from their corresponding low-resolution (LR) images. In many remote sensing (RS) applications, the spatial resolution of aerial or satellite imagery has a great impact on the accuracy and reliability of information extracted from the images. In this study, the potential of a DCNN-based SISR model, the enhanced super-resolution generative adversarial network (ESRGAN), to predict the spatial information degraded or lost in a hyper-spatial resolution unmanned aircraft system (UAS) RGB image set is investigated. The ESRGAN model is trained on a limited number of original HR images (50 out of 450 total) and virtually generated LR UAS images produced by downsampling the original HR images with a bicubic kernel at a factor of ×4. Quantitative and qualitative assessments of the super-resolved images using standard image quality measures (IQMs) confirm that the DCNN-based SISR approach can be successfully applied to LR UAS imagery for spatial resolution enhancement. The performance of the DCNN-based SISR approach on the UAS image set closely approximates performances reported on standard SISR image sets, with a mean peak signal-to-noise ratio (PSNR) of around 28 dB and a mean structural similarity (SSIM) index of around 0.85. Furthermore, by exploiting the rigorous Structure-from-Motion (SfM) photogrammetry procedure, an accurate task-based IQM for evaluating the quality of the super-resolved images is carried out. The results verify that the interior and exterior imaging geometry, which is extremely important for extracting highly accurate spatial information from UAS imagery in photogrammetric applications, can be accurately retrieved from a super-resolved image set. The numbers of corresponding keypoints and dense points generated by the SfM photogrammetry process are about 6 and 17 times greater, respectively, than those extracted from the corresponding LR image set.
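
    The evaluation protocol described above can be sketched as follows: downsample an HR image with a bicubic kernel at ×4 to create the virtual LR input, super-resolve it, and score the result with PSNR and SSIM. In this minimal sketch a random array stands in for a real UAS frame and plain bicubic upsampling stands in for the trained ESRGAN generator, so the printed scores are illustrative only.

    ```python
    import numpy as np
    from skimage.transform import resize
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    rng = np.random.default_rng(2)
    hr = rng.random((256, 256, 3))  # placeholder HR image in [0, 1]

    # x4 bicubic (order=3) downsampling to create the virtual LR input
    lr = resize(hr, (64, 64, 3), order=3, anti_aliasing=True)

    # Stand-in for the SISR model: bicubic upsampling back to HR size
    sr = resize(lr, hr.shape, order=3)

    print("PSNR (dB):", peak_signal_noise_ratio(hr, sr, data_range=1.0))
    print("SSIM:", structural_similarity(hr, sr, channel_axis=2,
                                         data_range=1.0))  # skimage >= 0.19
    ```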

    Review and Evaluation of Deep Learning Architectures for Efficient Land Cover Mapping with UAS Hyper-Spatial Imagery: A Case Study Over a Wetland

    Deep learning has already proven to be a powerful state-of-the-art technique for many image understanding tasks in computer vision and other applications, including remote sensing (RS) image analysis. Unmanned aircraft systems (UASs) offer a viable and economical alternative to conventional sensors and platforms for acquiring data of high spatial and temporal resolution with high operational flexibility. Coastal wetlands are among the most challenging and complex ecosystems for land cover prediction and mapping because land cover targets often show high intra-class and low inter-class variance. In recent years, several deep convolutional neural network (CNN) architectures have been proposed for pixel-wise image labeling, commonly called semantic image segmentation. In this paper, some of the more recent deep CNN architectures proposed for semantic image segmentation are reviewed, and each model’s training efficiency and classification performance are evaluated by training it on a limited labeled image set. Training samples are provided using hyper-spatial resolution UAS imagery over a wetland area, and the required ground-truth images are prepared by manual image labeling. Experimental results demonstrate that deep CNNs have great potential for accurate land cover prediction using UAS hyper-spatial resolution images. Some simple deep learning architectures perform comparably to, or even better than, complex and very deep architectures while requiring remarkably fewer training epochs. This performance is especially valuable when limited training samples are available, which is the common case in most RS applications.
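
    As a minimal illustration of the pixel-wise labeling setup evaluated above, the sketch below defines a small fully convolutional encoder-decoder and runs one training step on placeholder tiles; the architecture, tile size, and class count are hypothetical and far simpler than the reviewed models.

    ```python
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        """Toy encoder-decoder producing per-pixel class logits."""
        def __init__(self, n_classes: int = 4):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),  # halve spatial resolution
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear",
                            align_corners=False),
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, n_classes, 1),  # per-pixel class logits
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.decoder(self.encoder(x))

    model = TinySegNet()
    tiles = torch.rand(2, 3, 128, 128)          # placeholder RGB tiles
    masks = torch.randint(0, 4, (2, 128, 128))  # placeholder label masks

    loss = nn.CrossEntropyLoss()(model(tiles), masks)
    loss.backward()
    print(f"Cross-entropy loss: {loss.item():.3f}")
    ```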