18 research outputs found

    Unoccupied aerial systems temporal phenotyping and phenomic selection for maize breeding and genetics

    Emerging tools in plant phenomics and high-throughput field phenotyping are redefining possibilities for objective decision support in plant breeding and agronomy, as well as for discoveries in plant biology and the plant sciences. Unoccupied aerial systems (UAS, i.e., drones) have allowed inexpensive and rapid remote sensing of many genotypes over time in relevant field settings. UAS phenomics approaches have iterated rapidly, mimicking the progression of genomics over the last 30 years: the progression of UAS equipment parallels that of DNA markers, while UAS analytics parallels the progression from single-marker linkage mapping to genomic selection. The TAMU maize breeding program first focused on using UAS to automate routine traits (plant height, plant population, etc.), comparing these to ground reference measurements. Finding success, we next focused on developing novel measurements impractical or impossible to collect manually, such as plant growth and vegetation index curves. UAS plant growth curves measured in a genetic mapping population have allowed discovery of temporal variation in quantitative trait loci (QTL). Now, phenomic selection approaches are being tested using temporal UAS data, as first described using near-infrared reflectance spectroscopy (NIRS) of grain. Phenomic selection is similar to genomic selection but uses a multitude of plant phenotypic measurements to identify relatedness and predict germplasm performance. Phenotypic measurements are thus treated as random markers, with the underlying genetic or physiological cause remaining unknown. Using multiple extracted image features from multiple time points, genotype rankings have been successfully predicted for grain yield. Among the most exciting aspects has been the identification of novel segregating physiological phenotypes important in prediction, which occur at growth stages earlier than previously evaluated.
Similarly, UAS have allowed investigation of plant responses to biotic and abiotic stress over time. UAS findings and approaches permit new fundamental plant biology and physiology research, which is catalyzing a new era in the plant sciences.
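The phenomic selection idea described above, treating temporal UAS image features as anonymous markers analogous to SNPs, can be sketched as a simple ridge regression. This is a minimal illustration, not the program's actual pipeline; the function name and toy data are hypothetical.

```python
import numpy as np

def phenomic_ridge_predict(F_train, y_train, F_test, lam=1.0):
    """Ridge regression treating phenomic features (e.g. temporal UAS
    vegetation-index values) as anonymous markers, analogous to genomic
    selection with SNPs. Features are standardized using training
    statistics, then coefficients are shrunk by the penalty lam."""
    mu, sd = F_train.mean(axis=0), F_train.std(axis=0) + 1e-9
    Xtr = (F_train - mu) / sd
    Xte = (F_test - mu) / sd
    p = Xtr.shape[1]
    # Closed-form ridge solution: (X'X + lam*I)^-1 X'(y - ybar)
    beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(p),
                           Xtr.T @ (y_train - y_train.mean()))
    return y_train.mean() + Xte @ beta

# Toy usage: 20 genotypes, 50 image features pooled across flight dates
rng = np.random.default_rng(0)
F = rng.normal(size=(20, 50))
true_beta = rng.normal(size=50) * 0.3
y = F @ true_beta + rng.normal(scale=0.1, size=20)
pred = phenomic_ridge_predict(F[:15], y[:15], F[15:], lam=5.0)
```

As in genomic selection, no feature here is interpreted causally; the model only exploits relatedness encoded in the feature matrix.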

    Image to Image Deep Learning for Enhanced Vegetation Height Modeling in Texas

    Vegetation canopy height mapping is vital for forest monitoring. However, the high cost and inefficiency of manual tree measurements, coupled with the irregular and limited local-scale acquisition of airborne lidar data, continue to impede its widespread application. The increasing availability of high-spatial-resolution imagery is creating opportunities to characterize forest attributes at finer resolutions over large regions. In this study, we investigate the synergy of airborne lidar and high-spatial-resolution USDA-NAIP imagery for detailed canopy height mapping using an image-to-image deep learning approach. Our main inputs were 1 m NAIP image patches, which served as predictor layers, and corresponding 1 m canopy height models derived from airborne lidar data, which served as output layers. We adapted a U-Net model architecture for canopy height regression, training and validating the models with 10,000 256-by-256 pixel image patches. We evaluated three settings for the U-Net encoder depth and used both 1 m and 2 m datasets to assess their impact on model performance. Canopy height predictions from the fitted models were highly correlated (R2 = 0.70–0.89), precise (MAE = 1.37–2.21 m), and virtually unbiased (Bias = −0.20 to 0.07 m) with respect to validation data. The trained models also performed adequately on the independent test data (R2 = 0.62–0.78, MAE = 3.06–4.10 m). Models with higher encoder depths (3 and 4) trained with 2 m data provided better predictions than the model with encoder depth 2 trained on 1 m data. Inter-comparisons with existing canopy height products also showed that our canopy height map agreed better with reference airborne lidar canopy height estimates. This study shows the potential of developing regional canopy height products using airborne lidar and NAIP imagery to support forest productivity and carbon modeling at spatially detailed scales.
The 30 m canopy height map generated over Texas holds promise for advancing economic and sustainable forest management goals and enhancing decision-making in natural resource management across the state.
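The three agreement measures reported above (R2, MAE, and bias) can be computed as in this minimal sketch; the function name is illustrative, and bias is taken here as the mean of predicted minus observed values.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute R^2, mean absolute error, and bias (mean of predicted
    minus observed), the three agreement measures reported for the
    canopy height models."""
    resid = y_pred - y_true
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {"R2": 1.0 - ss_res / ss_tot,
            "MAE": np.mean(np.abs(resid)),
            "Bias": np.mean(resid)}

# Perfect agreement gives R2 = 1, MAE = 0, Bias = 0
m = regression_metrics(np.array([5.0, 10.0, 15.0]),
                       np.array([5.0, 10.0, 15.0]))
```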

    Estimating canopy cover from ICESat-2

    NASA ICESat-2 Science Team

    Landsat-Scale Regional Forest Canopy Height Mapping Using ICESat-2 Along-Track Heights: Case Study of Eastern Texas

    Spaceborne profiling lidar missions such as the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) are collecting direct elevation measurements, supporting the retrieval of vegetation attributes such as canopy height that are crucial in forest carbon and ecological studies. However, such profiling lidar systems collect observations along predefined ground tracks, which limits the spatially complete mapping of forest canopy height. We demonstrate that the fusion of ICESat-2 along-track canopy height estimates and ancillary Landsat and LANDFIRE (Landscape Fire and Resource Management Planning Tools Project) data can enable the generation of spatially complete canopy height data at a regional level in the United States. We developed gradient-boosted regression models relating canopy heights to ancillary data values and used them to predict canopy height in unobserved locations at a 30 m spatial resolution. Model performance varied (R2 = 0.44–0.50, MAE = 2.61–2.80 m) when individual (per-month) Landsat data and LANDFIRE data were used. Improved performance was observed when combined Landsat and LANDFIRE data were used (R2 = 0.69, MAE = 2.09 m). We produced a gridded canopy height product over our study area in eastern Texas, which agreed moderately (R2 = 0.46, MAE = 4.38 m) with independent airborne lidar-derived canopy heights. Further, we conducted a comparative assessment with the Global Forest Canopy Height product, an existing 30 m spatial resolution canopy height product generated using GEDI (Global Ecosystem Dynamics Investigation) canopy height and multitemporal Landsat data. In general, our product showed better agreement with airborne lidar heights than the global dataset (R2 = 0.19, MAE = 5.83 m).
Major differences in canopy height values between the two products are attributed to land cover changes, the height metrics used (98th percentile in this study vs. 95th), and inherent differences in lidar sampling and geolocation uncertainties between ICESat-2 and GEDI. On the whole, our integration of ICESat-2 data with ancillary datasets was effective for spatially complete canopy height mapping. For better modeling performance, we recommend careful selection of ICESat-2 datasets to remove erroneous data and applying a time series of Landsat data to account for phenological changes. The canopy height product provides a valuable, spatially detailed, and synoptic view of canopy heights over the study area, which would support various forestry and ecological assessments at the enhanced 30 m Landsat spatial resolution.
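The along-track preprocessing step, summarizing classified canopy-photon heights into fixed-length segments by a high percentile (the 98th, as used in this study), can be sketched as follows. The function name and toy values are illustrative, not from the ATL08 product itself.

```python
import numpy as np

def segment_canopy_heights(along_track_m, height_above_ground_m,
                           seg_len=100.0, pctl=98):
    """Aggregate canopy-photon heights into fixed-length along-track
    segments and take a high percentile of the heights in each segment
    as that segment's canopy height estimate."""
    seg_id = np.floor(np.asarray(along_track_m) / seg_len).astype(int)
    h = np.asarray(height_above_ground_m, dtype=float)
    return {int(s): float(np.percentile(h[seg_id == s], pctl))
            for s in np.unique(seg_id)}

# Toy track: three photons in the first 100 m segment, two in the second
x = np.array([10.0, 50.0, 90.0, 120.0, 180.0])
h = np.array([20.0, 22.0, 18.0, 25.0, 24.0])
heights = segment_canopy_heights(x, h)
```

These per-segment heights are the response values that the gradient-boosted models then relate to the Landsat and LANDFIRE predictor layers.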

    ICESat-2 for Canopy Cover Estimation at Large-Scale on a Cloud-Based Platform

    Forest canopy cover is an essential biophysical parameter of ecological significance, especially for characterizing woodlands and forests. This research focused on using data from the ICESat-2/ATLAS spaceborne lidar sensor, a photon-counting altimetry system, to map forest canopy cover over a large, country-wide extent. The study proposed a novel approach to compute categorized canopy cover using photon-counting data and available ancillary Landsat images to build the canopy cover model. In addition, this research tested a cloud-mapping platform, Google Earth Engine (GEE), as an example for a large-scale study. The canopy cover map of the Republic of Türkiye produced from this study has an average accuracy of over 70%. Even though the results were promising, issues caused by the auxiliary data negatively affected the overall accuracy. Moreover, while GEE offered many benefits, such as user-friendliness and convenience, it had processing limits that posed challenges for large-scale studies. Using weak- or strong-beam segments separately did not show a significant difference in estimating canopy cover. Briefly, this study demonstrates the potential of using photon-counting data and GEE for mapping forest canopy cover at a large scale.
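A minimal sketch of photon-based canopy cover, assuming ATL08-style photon class labels (0 = noise, 1 = ground, 2 = canopy, 3 = top of canopy). This is an illustrative simplification, not the study's exact categorized-cover computation.

```python
import numpy as np

def photon_canopy_cover(class_labels):
    """Canopy cover as the fraction of signal photons classified as
    canopy (labels 2 and 3) over all signal photons (ground plus
    canopy classes); noise photons (label 0) are ignored."""
    labels = np.asarray(class_labels)
    signal = np.isin(labels, (1, 2, 3))
    canopy = np.isin(labels, (2, 3))
    return canopy.sum() / signal.sum()

# Toy segment: 3 canopy photons out of 6 signal photons -> cover 0.5
cover = photon_canopy_cover([1, 1, 2, 3, 2, 1, 0, 0])
```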

    Using ICESat-2 to Estimate and Map Forest Aboveground Biomass: A First Example

    National Aeronautics and Space Administration’s (NASA’s) Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) provides rich insights over the Earth’s surface through elevation data collected by its Advanced Topographic Laser Altimeter System (ATLAS) since its launch in September 2018. While this mission is primarily aimed at capturing ice measurements, ICESat-2 also provides data over vegetated areas, offering the capability to gain insights into ecosystem structure and the potential to contribute to the sustainable management of forests. This study involved an examination of the utility of ICESat-2 for estimating forest aboveground biomass (AGB). The objectives of this study were to: (1) investigate the use of canopy metrics for estimating AGB, using data extracted from an ICESat-2 transect over forests in southeast Texas; (2) compare the accuracy for estimating AGB using data from the strong beam and weak beam; and (3) upscale predicted AGB estimates using variables from Landsat multispectral imagery and land cover and canopy cover maps, to generate a 30 m spatial resolution AGB map. Methods previously developed with simulated ICESat-2 data over Sam Houston National Forest (SHNF) in southeast Texas were adapted using actual data from an adjacent ICESat-2 transect over similar vegetation conditions. Custom noise filtering and photon classification algorithms were applied to ICESat-2’s geolocated photon data (ATL03) for one beam pair, consisting of a strong and weak beam, and canopy height estimates were retrieved. Canopy height parameters were extracted from 100 m segments in the along-track direction for estimating AGB, using regression analysis. ICESat-2-derived AGB estimates were then extrapolated to develop a 30 m AGB map for the study area, using vegetation indices from Landsat 8 Operational Land Imager (OLI), National Land Cover Database (NLCD) land cover and canopy cover, with random forests (RF).
The AGB estimation models used few canopy parameters and suggest the possibility of applying well-developed methods for modeling AGB with airborne light detection and ranging (lidar) data to processed ICESat-2 data. The final regression model achieved an R2 of 0.62 and a root mean square error (RMSE) of 24.63 Mg/ha for estimating AGB, and RF model evaluation with a separate test set yielded an R2 of 0.58 and an RMSE of 23.89 Mg/ha. Findings provide an initial look at the ability of ICESat-2 to estimate AGB and serve as a basis for further upscaling efforts.
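The regression step, relating segment-level canopy height metrics to AGB, can be sketched with an ordinary least squares fit. The toy data below are illustrative, not measurements from SHNF, and the real model may use several metrics and a different functional form.

```python
import numpy as np

def fit_agb_model(canopy_metrics, agb):
    """Ordinary least squares fit of AGB (Mg/ha) on per-segment canopy
    height metrics, with an intercept term prepended."""
    X = np.column_stack([np.ones(len(agb)), canopy_metrics])
    coef, *_ = np.linalg.lstsq(X, agb, rcond=None)
    return coef

def predict_agb(coef, canopy_metrics):
    """Apply the fitted coefficients to new segment metrics."""
    X = np.column_stack([np.ones(len(canopy_metrics)), canopy_metrics])
    return X @ coef

# Toy data: AGB exactly proportional to a single canopy height metric
h = np.array([[5.0], [10.0], [15.0], [20.0]])
agb = np.array([50.0, 100.0, 150.0, 200.0])
coef = fit_agb_model(h, agb)
pred = predict_agb(coef, h)
```

The study then uses such segment-level predictions as training data for a random forest driven by Landsat and NLCD layers to produce the wall-to-wall 30 m map.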

    From LiDAR Waveforms to Hyper Point Clouds: A Novel Data Product to Characterize Vegetation Structure

    Full waveform (FW) LiDAR holds great potential for retrieving vegetation structure parameters at a high level of detail, but this prospect is constrained by practical factors such as the lack of readily available processing tools and the technical intricacy of waveform processing. This study introduces a new product named the Hyper Point Cloud (HPC), derived from FW LiDAR data, and explores its potential applications, such as tree crown delineation using the HPC-based intensity and percentile height (PH) surfaces, which shows promise as a solution to the constraints of using FW LiDAR data. The results of the HPC present a new direction for handling FW LiDAR data and offer prospects for studying the mid-story and understory of vegetation with high point density (~182 points/m2). The intensity-derived digital surface model (DSM) generated from the HPC shows that the ground region has higher maximum intensity (MAXI) and mean intensity (MI) than the vegetation region, while having lower total intensity (TI) and number of intensities (NI) at a given grid cell. Our analysis of intensity distribution contours at the individual tree level exhibits similar patterns, indicating that MAXI and MI decrease from the tree crown center to the tree boundary, while a rising trend is observed for TI and NI. These intensity variable contours provide a theoretical justification for using HPC-based intensity surfaces to segment tree crowns and exploit their potential for extracting tree attributes. The HPC-based intensity surfaces and the HPC-based PH Canopy Height Models (CHM) demonstrate promising tree segmentation results comparable to the LiDAR-derived CHM for estimating tree attributes such as tree locations, crown widths and tree heights.
We envision that products such as the HPC and the HPC-based intensity and height surfaces introduced in this study can open new perspectives for the use of FW LiDAR data and lower the technical barrier to exploring FW LiDAR data for detailed vegetation structure characterization.
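The core HPC idea can be sketched as follows: every waveform sample above a noise threshold becomes a point that keeps its range (here, height) and its amplitude as intensity. The thresholding and the function name are simplified assumptions, not the paper's exact processing chain.

```python
import numpy as np

def waveform_to_hpc(heights_m, amplitudes, noise_threshold):
    """Convert one full-waveform record into Hyper Point Cloud points:
    each sample above the noise threshold becomes a (height, intensity)
    point, so one waveform yields many points instead of a few
    discrete returns."""
    amps = np.asarray(amplitudes, dtype=float)
    keep = amps > noise_threshold
    return np.column_stack([np.asarray(heights_m, float)[keep], amps[keep]])

# One toy waveform: canopy returns near the top, ground return at 0 m
heights = np.array([30.0, 29.0, 28.0, 1.0, 0.0])
amps = np.array([5.0, 40.0, 12.0, 3.0, 55.0])
points = waveform_to_hpc(heights, amps, noise_threshold=4.0)
```

Gridding the intensities of such points per cell (maximum, mean, total, count) yields the MAXI, MI, TI, and NI surfaces discussed above.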

    A Deep Learning Semantic Segmentation-Based Approach for Field-Level Sorghum Panicle Counting

    Small unmanned aerial systems (UAS) have emerged as high-throughput platforms for the collection of high-resolution image data over large crop fields to support precision agriculture and plant breeding research. At the same time, the improved efficiency in image capture is leading to massive datasets, which pose analysis challenges in providing needed phenotypic data. To complement these high-throughput platforms, there is an increasing need in crop improvement to develop robust image analysis methods to analyze large amounts of image data. Analysis approaches based on deep learning models are currently the most promising and show unparalleled performance in analyzing large image datasets. This study developed and applied an image analysis approach based on a SegNet deep learning semantic segmentation model to estimate sorghum panicle counts, which are critical phenotypic data in sorghum crop improvement, from UAS images over selected sorghum experimental plots. The SegNet model was trained to semantically segment UAS images into sorghum panicles, foliage, and exposed ground using 462 labeled images of 250 × 250 pixels, and was then applied to the field orthomosaic to generate a field-level semantic segmentation. Individual panicle locations were obtained after post-processing the segmentation output to remove small objects and split merged panicles. A comparison between model panicle count estimates and manually digitized panicle locations in 60 randomly selected plots showed an overall detection accuracy of 94%. A per-plot panicle count comparison also showed high agreement between estimated and reference panicle counts (Spearman correlation ρ = 0.88, mean bias = 0.65). Misclassifications of panicles during the semantic segmentation step and mosaicking errors in the field orthomosaic contributed mainly to panicle detection errors.
Overall, the approach based on deep learning semantic segmentation showed good promise; with a larger labeled dataset and extensive hyperparameter tuning, it should provide an even more robust and effective characterization of sorghum panicle counts.
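The post-processing step of counting panicle regions and discarding small objects can be sketched with a simple 4-connectivity component search over the binary panicle mask. The mask and minimum size below are illustrative; the study's actual post-processing also splits merged panicles.

```python
from collections import deque

def count_panicles(mask, min_size=2):
    """Count connected foreground components (4-connectivity) in a
    binary segmentation mask, discarding components smaller than
    min_size pixels (the small-object removal step)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill to measure this component
                size, q = 0, deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count

# Toy mask: one 3-pixel blob, one isolated pixel (dropped), one 2-pixel blob
mask = [
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
]
n = count_panicles(mask, min_size=2)
```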

    Woody Plant Encroachment: Evaluating Methodologies for Semiarid Woody Species Classification from Drone Images

    Globally, native semiarid grasslands and savannas have experienced a densification of woody plant species, leading to a multitude of environmental, economic, and cultural changes. These encroached areas are unique in that the diversity of tree species is small, yet the individual species possess diverse phenological responses. The overall goal of this study was to evaluate the ability of very high resolution drone imagery to accurately map species of woody plants encroaching on semiarid grasslands. For a site in the Edwards Plateau ecoregion of central Texas, we used affordable, very high resolution drone imagery to which we applied maximum likelihood (ML), support vector machine (SVM), random forest (RF), and VGG-19 convolutional neural network (CNN) algorithms in combination with pixel-based (with and without post-processing) and object-based (small and large) classification methods. Based on test sample data (n = 1000), the VGG-19 CNN model achieved the highest overall accuracy (96.9%). SVM came in second with an average classification accuracy of 91.2% across all methods, followed by RF (89.7%) and ML (86.8%). Overall, our findings show that RGB drone sensors are indeed capable of providing highly accurate classifications of woody plant species in semiarid landscapes, comparable to, and in some regards even greater than, those achieved by aerial and drone imagery using hyperspectral sensors in more diverse landscapes.
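Overall accuracy as reported above is simply the fraction of test samples whose predicted class matches the reference label. A minimal sketch follows; the species labels are illustrative, not the study's actual class list.

```python
def overall_accuracy(reference, predicted):
    """Overall classification accuracy: fraction of test samples whose
    predicted class matches the reference label."""
    assert len(reference) == len(predicted)
    correct = sum(r == p for r, p in zip(reference, predicted))
    return correct / len(reference)

# Toy check: 3 of 4 labels agree -> 75% overall accuracy
acc = overall_accuracy(["juniper", "oak", "mesquite", "oak"],
                       ["juniper", "oak", "mesquite", "grass"])
```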