16 research outputs found

    Combining Multiple Geospatial Data for Estimating Aboveground Biomass in North Carolina Forests

    Mapping and quantifying forest inventories are critical for managing and developing forests for natural resource conservation and for evaluating the aboveground forest biomass (AGFB) technically available for bioenergy production. AGFB estimation procedures that rely on traditional, spatially sparse field inventory samples are problematic for geographically diverse regions such as the state of North Carolina in the southeastern U.S. We propose an alternative AGFB estimation procedure that combines multiple geospatial data sources. The procedure uses land cover maps to allocate forested land areas to alternative forest types, uses light detection and ranging (LiDAR) data to evaluate tree heights, and calculates the area-total AGFB using region- and tree-type-specific functions that relate tree heights to AGFB. We demonstrate the procedure for a selected North Carolina region, a 2.3 km² area randomly chosen in Duplin County. The tree diameter functions are statistically estimated from Forest Inventory and Analysis (FIA) data, and two publicly available, open-source land cover maps, the Cropland Data Layer (CDL) and the National Land Cover Database (NLCD), are compared and contrasted as sources of information on the location and typology of forests in the study area. Assessing the consistency of forestland mapping derived from the CDL and NLCD data lets us estimate how the disagreement between these two widely used maps affects the AGFB estimation. The methodology and results we present are expected to complement and inform large-scale assessments of woody biomass in the region.
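
    A minimal sketch of the biomass roll-up described above, assuming a land cover raster already reclassified to forest-type codes, a co-registered LiDAR canopy height raster, and power-law height-to-biomass functions; the type codes, coefficients, and array names are illustrative placeholders, not the paper's estimated FIA-based functions.

```python
import numpy as np

# Illustrative power-law coefficients relating LiDAR canopy height (m) to
# aboveground biomass density (kg/m^2): density = a * height**b.
# Keys are hypothetical forest-type codes from a reclassified CDL/NLCD map
# (1 = deciduous, 2 = evergreen, 3 = mixed); none of these values come
# from the paper's FIA-based functions.
HEIGHT_TO_AGB = {1: (2.5, 1.6), 2: (3.1, 1.5), 3: (2.8, 1.55)}

def estimate_agfb(land_cover, canopy_height, pixel_area_m2=900.0):
    """Sum aboveground forest biomass (Mg) over co-registered land cover
    and canopy height grids (default pixel area assumes 30 m pixels)."""
    total_kg = 0.0
    for code, (a, b) in HEIGHT_TO_AGB.items():
        mask = (land_cover == code) & (canopy_height > 0)
        total_kg += np.sum(a * canopy_height[mask] ** b) * pixel_area_m2
    return total_kg / 1000.0  # kg -> Mg

# Toy 4x4 grids standing in for real rasters (0 = non-forest).
lc = np.array([[1, 1, 2, 0],
               [1, 3, 2, 0],
               [3, 3, 2, 2],
               [0, 0, 1, 1]])
ch = np.random.default_rng(0).uniform(5, 30, size=lc.shape)
print(f"Estimated AGFB: {estimate_agfb(lc, ch):.1f} Mg")
```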

    Development of a 3D Kinetic Data Structure adapted for a 3D Spatial Dynamic Field Simulation

    Geographic information systems (GIS) are widely used for the representation, management, and analysis of spatial data in many disciplines, including the geosciences, agriculture, forestry, meteorology, and oceanography. In particular, geoscientists increasingly use these tools for data integration and management in environmental applications ranging from water resources management to the study of global warming. Beyond these capabilities, geoscientists need to model and simulate 3D dynamic spatial fields and readily integrate the results with other relevant spatial information in order to gain a better understanding of the environment. However, GIS remain very limited for the modeling and simulation of spatial fields, which are mostly three-dimensional and dynamic. These limitations are mainly related to existing GIS spatial data structures, which are 2D and static and are not designed to address the 3D and dynamic aspects of continuous fields. Hence, the main objective of this research is to improve current GIS capabilities for the modeling and simulation of 3D dynamic spatial fields by developing a 3D kinetic data structure. Based on our literature review, the 3D dynamic Delaunay tetrahedralization (DT) and its dual, the 3D Voronoi diagram (VD), have interesting potential for handling the 3D and dynamic nature of such phenomena. However, because of the particular configuration of datasets in geoscience applications, the DT of such data is often inadequate for the numerical integration and simulation of a dynamic field. For example, in a hydrogeological simulation, the data form a highly irregular set of points that is vertically dense and horizontally sparse, which may result in very large, very small, or very thin tessellation elements. The size and shape of the tessellation elements have an important impact on the accuracy of the simulation results as well as on the related computational costs. Therefore, in the first step of this research, we develop an adaptive refinement method based on the 3D dynamic Delaunay data structure and construct a 3D adaptive tessellation for the representation and simulation of a dynamic field. This tessellation conforms to the complexity of the field, taking into account discontinuities as well as shape and size criteria. To handle the dynamic behavior of 3D spatial fields in a moving framework within GIS, in the second step we extend the 3D dynamic VD to a 3D kinetic VD, so that the 3D spatial tessellation can be kept up to date during a dynamic simulation process. We then show how such a spatial data structure can support moving elements within the tessellation and their interactions. The proposed kinetic data structure provides an elegant way to manage connectivity changes between moving elements within the tessellation. In addition, the problems that result from using a fixed time step, such as overshoots and undetected collisions, are addressed by providing very flexible mechanisms to detect and manage different changes (events) in the 3D Delaunay tessellation. Finally, we study the potential of the 3D kinetic spatial data structure for the simulation of a dynamic field in 3D space. For this purpose, we describe in detail the steps for adapting this data structure, from its discretization for a 3D continuous field to its numerical integration based on an event-driven method, and show how the tessellation moves and how the topology, connectivity, and physical parameters of the tessellation cells are locally updated following any event in the tessellation. Three case studies are presented in the thesis to validate the proposed spatial data structure and its potential for the simulation of 3D dynamic spatial fields. According to our observations, the data structure is maintained throughout the simulation process and the 3D spatial information is managed adequately. Furthermore, the results obtained from the experiments are very satisfactory and are comparable with results obtained from other existing methods for the simulation of the same dynamic fields. Finally, some limitations of the proposed approach, related to the development of the 3D kinetic data structure itself and to its adaptation for the representation and simulation of a 3D dynamic spatial field, are discussed, and some solutions are suggested for improving the proposed approach.
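
    A minimal sketch of the adaptive-refinement idea for poorly shaped tetrahedra, using SciPy's static 3D Delaunay tessellation as a simplified stand-in for the dynamic/kinetic Delaunay structure developed in the thesis; the edge-ratio quality criterion, threshold, and centroid insertion rule are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

def tet_edge_ratio(pts):
    """Longest-to-shortest edge ratio of a tetrahedron (4x3 array);
    a simple stand-in for the thesis's shape/size quality criteria."""
    i, j = np.triu_indices(4, k=1)
    edges = np.linalg.norm(pts[i] - pts[j], axis=1)
    return edges.max() / edges.min()

def refine_once(points, max_ratio=4.0):
    """One pass of naive adaptive refinement: build the (static) 3D
    Delaunay tessellation and insert the centroid of every badly shaped
    tetrahedron. A dynamic/kinetic DT would insert points incrementally
    instead of rebuilding the whole tessellation."""
    dt = Delaunay(points)
    new_pts = []
    for simplex in dt.simplices:
        tet = points[simplex]
        if tet_edge_ratio(tet) > max_ratio:
            new_pts.append(tet.mean(axis=0))
    if new_pts:
        points = np.vstack([points, new_pts])
    return points, len(new_pts)

# Borehole-like sampling: vertically dense, horizontally sparse,
# which tends to produce thin ("sliver") tetrahedra.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1000, size=(20, 2))
z = np.linspace(0, 50, 30)
points = np.array([[x, y, zi] for x, y in xy for zi in z])

points, inserted = refine_once(points)
print(f"Inserted {inserted} refinement points; total now {len(points)}")
```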

    Inundated Vegetation Mapping Using SAR Data: A Comparison of Polarization Configurations of UAVSAR L-Band and Sentinel C-Band

    Flood events have become more intense and more frequent due to heavy rainfall and hurricanes caused by global warming. Accurate floodwater extent maps are essential information sources for emergency management agencies and flood relief programs to direct their resources to the most affected areas. Synthetic Aperture Radar (SAR) data are superior to optical data for floodwater mapping, especially in vegetated areas and in forests adjacent to urban areas and critical infrastructure. Investigating floodwater mapping with the various available SAR sensors and comparing their performance allows the identification of suitable SAR sensors for mapping inundated areas in different land covers, such as forests and vegetated areas. In this study, we investigated the performance of polarization configurations for flood boundary delineation in vegetated and open areas derived from Sentinel-1B C-band and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band data collected during flood events resulting from Hurricane Florence in eastern North Carolina. The datasets from both sensors, collected on the same day over the same study area during the flooding event, were processed and classified into five land cover classes using a machine learning method, the Random Forest classification algorithm. We compared the classification results of linear, dual, and full polarizations of the SAR datasets. The L-band fully polarized data achieved the highest classification accuracy for flood mapping, as the decomposition of fully polarized SAR data allows land cover features to be identified based on their scattering mechanisms.
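
    A minimal sketch of comparing polarization configurations with a Random Forest classifier, assuming per-pixel backscatter features have already been extracted from the co-registered SAR stacks at labeled reference locations; the feature sets, class labels, and hyperparameters below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for per-pixel backscatter features (real features
# would come from calibrated, terrain-corrected SAR rasters sampled at
# labeled reference points for the five land cover classes).
rng = np.random.default_rng(42)
n = 2000
features = {
    "C-band dual (VV, VH)": rng.normal(size=(n, 2)),
    "L-band full (HH, HV, VV + decomposition)": rng.normal(size=(n, 6)),
}
labels = rng.integers(0, 5, size=n)  # five land cover classes

for name, X in features.items():
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: overall accuracy {acc:.2f}")
```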

    Three-Dimensional Inundation Mapping Using UAV Image Segmentation and Digital Surface Model

    Flood occurrence is increasing due to the expansion of urbanization and extreme weather such as hurricanes; hence, research on methods of inundation monitoring and mapping has increased to reduce the severe impacts of flood disasters. This research studies and compares two methods for inundation depth estimation using UAV images and topographic data. The methods consist of three main stages: (1) extracting flooded areas and creating 2D inundation polygons using deep learning; (2) reconstructing the 3D water surface using the polygons and topographic data; and (3) deriving a water depth map from the 3D reconstructed water surface and a pre-flood DEM. The two methods differ in how they reconstruct the 3D water surface (stage 2). The first method uses structure from motion (SfM) to create a point cloud of the area from overlapping UAV images, and the water polygons resulting from stage 1 are applied to classify the water point cloud. The second method reconstructs the water surface by intersecting the water polygons with a pre-flood DEM created from pre-flood LiDAR data. We evaluate the proposed methods for inundation depth mapping over the Town of Princeville during a flooding event caused by Hurricane Matthew. The methods are compared and validated using USGS gauge water level data acquired during the flood event. The RMSEs for water depth using the SfM method and the integrated method based on deep learning and the DEM were 0.34 m and 0.26 m, respectively.
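
    A minimal sketch of stage 3 of the workflow (water depth = reconstructed water surface elevation minus pre-flood DEM within the inundation extent), assuming the water surface, DEM, and rasterized flood mask are co-registered grids; the array names and toy values are illustrative.

```python
import numpy as np

def water_depth_map(water_surface, dem, flood_mask):
    """Derive a water depth grid: water surface elevation minus pre-flood
    ground elevation, limited to the mapped inundation extent and clipped
    at zero (NaN outside the flooded area)."""
    depth = np.where(flood_mask, water_surface - dem, np.nan)
    return np.clip(depth, 0, None)

# Toy 3x3 grids standing in for co-registered rasters (meters).
dem = np.array([[10.0, 10.2, 10.5],
                [ 9.8, 10.0, 10.3],
                [ 9.5,  9.7, 10.0]])
water_surface = np.full_like(dem, 10.4)  # flat reconstructed water surface
flood_mask = dem < 10.4                  # inundated where ground is lower

print(np.round(water_depth_map(water_surface, dem, flood_mask), 2))
```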

    Flood Extent Mapping: An Integrated Method Using Deep Learning and Region Growing Using UAV Optical Data

    Flooding occurs frequently and causes loss of life and extensive damage to infrastructure and the environment. Accurate and timely mapping of flood extent to ascertain damages is critical and essential for relief activities. Recently, deep-learning-based approaches, including convolutional neural networks (CNNs), have shown promising results for flood extent mapping. However, these methods cannot extract floods underneath the vegetation canopy from optical imagery. This article addresses this problem by introducing an integrated CNN and region growing (RG) method for mapping both visible and underneath-vegetation flooded areas. The CNN-based classifier is used to extract flooded areas from the optical images, whereas the RG method is applied to estimate the extent of floods underneath vegetation that are not visible in the imagery, using a digital elevation model. A data augmentation technique is applied when training the CNN-based classifier to improve the classification results. The results show that data augmentation can enhance the accuracy of image classification and that the proposed integrated method efficiently detects floods both in the visible areas and in the areas covered by vegetation, which is essential to supporting effective flood emergency response and recovery activities.
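
    A minimal sketch of the region-growing idea: grow the CNN-detected visible flood extent into adjacent vegetated pixels whose ground elevation does not exceed that of a neighboring flooded pixel by more than a small tolerance. The 4-connected growth rule and the tolerance are assumptions for illustration, not the article's exact criterion.

```python
from collections import deque
import numpy as np

def grow_flood(visible_flood, dem, vegetation, tol=0.1):
    """Grow the CNN-detected flood extent into vegetated pixels whose
    elevation is at most `tol` meters above a neighboring flooded pixel
    (4-connected region growing on the DEM)."""
    rows, cols = dem.shape
    flooded = visible_flood.copy()
    queue = deque(zip(*np.nonzero(visible_flood)))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not flooded[nr, nc]:
                if vegetation[nr, nc] and dem[nr, nc] <= dem[r, c] + tol:
                    flooded[nr, nc] = True
                    queue.append((nr, nc))
    return flooded

# Toy example: a visible flood strip next to a low-lying vegetated area.
dem = np.array([[1.0, 1.0, 1.05, 1.1],
                [1.0, 1.0, 1.02, 1.3],
                [1.0, 1.0, 1.00, 1.4]])
visible = np.zeros_like(dem, dtype=bool)
visible[:, 0] = True
veg = np.zeros_like(dem, dtype=bool)
veg[:, 1:] = True

print(grow_flood(visible, dem, veg).astype(int))
```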

    Deep Convolutional Neural Network for Flood Extent Mapping Using Unmanned Aerial Vehicles Data

    Flooding is one of the leading natural-disaster threats to human life and property, especially in densely populated urban areas. Rapid and precise extraction of flooded areas is key to supporting emergency-response planning and providing damage assessment in both spatial and temporal measurements. Unmanned Aerial Vehicle (UAV) technology has recently been recognized as an efficient photogrammetric data acquisition platform that can quickly deliver high-resolution imagery because of its cost-effectiveness, ability to fly at lower altitudes, and ability to enter hazardous areas. Different image classification methods, including Support Vector Machines (SVMs), have been used for flood extent mapping. In recent years, there has been significant improvement in remote sensing image classification using Convolutional Neural Networks (CNNs). CNNs have demonstrated excellent performance on various tasks including image classification, feature extraction, and segmentation. CNNs can learn features automatically from large datasets through the organization of multiple layers of neurons and can implement nonlinear decision functions. This study investigates the potential of CNN approaches to extract flooded areas from UAV imagery. A VGG-based fully convolutional network (FCN-16s) was used in this research. The model was fine-tuned, and k-fold cross-validation was applied to estimate the performance of the model on the new UAV imagery dataset. This approach allowed FCN-16s to be trained on datasets that contained only one hundred training samples and resulted in a highly accurate classification. A confusion matrix was calculated to estimate the accuracy of the proposed method. The image segmentation results obtained from FCN-16s were compared with the results obtained from FCN-8s, FCN-32s, and SVMs. Experimental results showed that the FCNs could extract flooded areas precisely from UAV images compared to traditional classifiers such as SVMs. The classification accuracy achieved by FCN-16s, FCN-8s, FCN-32s, and SVM for the water class was 97.52%, 97.8%, 94.20%, and 89%, respectively.
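
    A minimal sketch of the accuracy assessment step (a confusion matrix and per-class accuracy computed from predicted and reference label maps); the class names, label encoding, and toy label maps are placeholders rather than the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

CLASSES = ["water", "vegetation", "building", "road"]  # placeholder names

def per_class_accuracy(reference, predicted, n_classes):
    """Producer's accuracy (recall) per class from flattened label maps,
    mirroring the confusion-matrix evaluation described above."""
    cm = confusion_matrix(reference.ravel(), predicted.ravel(),
                          labels=list(range(n_classes)))
    return np.diag(cm) / np.maximum(cm.sum(axis=1), 1)

# Toy 8x8 label maps standing in for reference data and FCN predictions.
rng = np.random.default_rng(3)
ref = rng.integers(0, len(CLASSES), size=(8, 8))
pred = ref.copy()
pred[rng.random(ref.shape) < 0.1] = 0  # simulate some misclassification

for name, acc in zip(CLASSES, per_class_accuracy(ref, pred, len(CLASSES))):
    print(f"{name}: {acc:.2%}")
```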

    Best Practices and Lessons Learned in Grant Writing for Ag/Applied Economists to Engage in Interdisciplinary Studies

    Learning to write successful grant applications takes significant time and effort. This paper presents knowledge, expertise, and strategies from experienced grant applicants and grant officers across several disciplines to support early-career scholars and first-time grant writers, with particular guidance for interdisciplinary collaboration. Many Agricultural and Applied Economists are invited to participate in interdisciplinary grant applications. It is important to fully understand the types of projects, the nature of collaboration, co-investigators' characteristics, expected contributions, anticipated benefits, and how collaborative research is valued by one's peers before initiating new opportunities. Leading and participating in interdisciplinary teams also requires mentorship, patience, professionalism, and excellent communication beyond scientific merit. This paper shares practical insights to guide scholars through the grant-writing process, beginning with nurturing a mindset, maintaining a consistent work ethic, actively seeking advice, identifying targeted programs, matching a program's priorities, following a step-by-step framework for team creation and management, effectively managing time and pressure, and transforming failure into success.