
    Quantifying the urban forest environment using dense discrete return LiDAR and aerial color imagery for segmentation and object-level biomass assessment

    The urban forest is becoming increasingly important in the contexts of urban green space and recreation, carbon sequestration and emission offsets, and socio-economic impacts. In addition to aesthetic value, these green spaces remove airborne pollutants, preserve natural resources, and mitigate adverse climate changes, among other benefits. A great deal of attention has recently been paid to urban forest management. However, the comprehensive monitoring of urban vegetation for carbon sequestration and storage is an under-explored research area. Such an assessment of carbon stores often requires information at the individual tree level, necessitating the proper masking of vegetation from the built environment, as well as the delineation of individual tree crowns. As an alternative to expensive and time-consuming manual surveys, remote sensing can be used effectively to characterize urban vegetation and man-made objects. Many studies in this field have made use of aerial and multispectral/hyperspectral imagery over cities. The emergence of light detection and ranging (LiDAR) technology, however, has provided new impetus to the effort of extracting objects and characterizing their 3D attributes; LiDAR has been used successfully to model buildings and urban trees. However, challenges remain when using such structural information only, and researchers have investigated fusion-based approaches that combine LiDAR and aerial imagery to extract objects, thereby allowing the complementary characteristics of the two modalities to be utilized. In this study, a fusion-based classification method was implemented between high spatial resolution aerial color (RGB) imagery and co-registered LiDAR point clouds to classify urban vegetation and buildings from other urban classes/cover types. Structural as well as spectral features were used in the classification method.
These features included height, flatness, and the distribution of normal surface vectors from LiDAR data, along with a non-calibrated LiDAR-based vegetation index, derived by combining LiDAR intensity at 1064 nm with the red channel of the RGB imagery. This novel index was dubbed the LiDAR-infused difference vegetation index (LDVI). Classification results indicated good separation between buildings and vegetation, with an overall accuracy of 92% and a kappa statistic of 0.85. A multi-tiered delineation algorithm was subsequently developed to extract individual tree crowns from the identified tree clusters, followed by the application of species-independent biomass models based on LiDAR-derived tree attributes in regression analysis. These LiDAR-based biomass assessments were conducted for individual trees, as well as for clusters of trees in cases where proper delineation of individual trees was impossible. The detection accuracy of the tree delineation algorithm was 70%. The LiDAR-derived biomass estimates were validated against allometry-based biomass estimates computed from field-measured tree data. It was found that LiDAR-derived tree volume, area, and different distribution parameters of height (e.g., maximum height, mean height) are important for modeling biomass. The best biomass models for the tree clusters and the individual trees showed adjusted R-squared values of 0.93 and 0.58, respectively. The results of this study showed that the developed fusion-based classification approach using LiDAR and aerial color (RGB) imagery is capable of producing good object detection accuracy. It was concluded that the LDVI can be used in vegetation detection and can act as a substitute for the normalized difference vegetation index (NDVI) when near-infrared multiband imagery is not available. Furthermore, the utility of LiDAR for characterizing the urban forest and associated biomass was demonstrated.
This work could have significant impact on the rapid and accurate assessment of urban green spaces and associated carbon monitoring and management.
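The abstract gives only the ingredients of the LDVI (LiDAR intensity at 1064 nm and the red channel of the RGB imagery), not its exact formulation. The sketch below assumes an NDVI-like normalized difference, with the LiDAR intensity standing in for the near-infrared band; the function name and reflectance values are hypothetical, not taken from the study.

```python
def ldvi(lidar_intensity, red):
    """Hypothetical sketch of a LiDAR-infused difference vegetation index.

    Assumes an NDVI-like normalized difference, with the 1064 nm LiDAR
    intensity standing in for the near-infrared band; the actual LDVI
    formulation is not spelled out in the abstract.
    """
    denom = lidar_intensity + red
    return (lidar_intensity - red) / denom if denom != 0 else 0.0

# Vegetation returns strongly at 1064 nm and reflects weakly in red,
# so a vegetated pixel should yield a clearly positive index value.
print(ldvi(0.60, 0.08))  # vegetated pixel: clearly positive
print(ldvi(0.30, 0.28))  # built surface: near zero
```

An index of this form inherits NDVI's scale (bounded between -1 and 1), which is what allows it to act as a drop-in substitute when no near-infrared band is available.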

    Unmanned Aerial Systems for Wildland and Forest Fires

    Wildfires represent an important natural risk, causing economic losses, human deaths, and severe environmental damage. In recent years, we have witnessed an increase in fire intensity and frequency. Research has been conducted towards the development of dedicated solutions for wildland and forest fire assistance and fighting. Systems have been proposed for the remote detection and tracking of fires. These systems have shown improvements in the areas of efficient data collection and fire characterization within small-scale environments. However, wildfires cover large areas, making some of the proposed ground-based systems unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial Systems (UAS) have been proposed. UAS have proven to be useful due to their maneuverability, allowing for the implementation of remote sensing, allocation strategies, and task planning. They can provide a low-cost alternative for the prevention, detection, and real-time support of firefighting. In this paper we review previous work related to the use of UAS in wildfires. Onboard sensor instruments, fire perception algorithms, and coordination strategies are considered. In addition, we present some of the recent frameworks proposing the use of both aerial vehicles and Unmanned Ground Vehicles (UGV) for a more efficient wildland firefighting strategy at a larger scale.
    Comment: A recently published version of this paper is available at: https://doi.org/10.3390/drones501001

    Relating Multimodal Imagery Data in 3D

    This research develops and improves the fundamental mathematical approaches and techniques required to relate imagery and imagery-derived multimodal products in 3D. Image registration, in a 2D sense, will always be limited by the 3D effects of viewing geometry on the target. Therefore, effects such as occlusion, parallax, shadowing, and terrain/building elevation can often be mitigated with even a modest amount of 3D target modeling. Additionally, the imaged scene may appear radically different based on the sensed modality of interest; this is evident from the differences in visible, infrared, polarimetric, and radar imagery of the same site. This thesis develops a 'model-centric' approach to relating multimodal imagery in a 3D environment. By correctly modeling a site of interest, both geometrically and physically, it is possible to remove or mitigate some of the most difficult challenges associated with multimodal image registration. In order to accomplish this feat, the mathematical framework necessary to relate imagery to geometric models is thoroughly examined. Since geometric models may need to be generated to apply this 'model-centric' approach, this research develops methods to derive 3D models from imagery and LiDAR data. Of critical note is the implementation of complementary techniques for relating multimodal imagery that utilize the geometric model in concert with physics-based modeling to simulate scene appearance under diverse imaging scenarios. Finally, the often-neglected final phase of mapping localized image registration results back to the world coordinate system model for final data archival is addressed. In short, once a target site is properly modeled, both geometrically and physically, it is possible to orient the 3D model to the same viewing perspective as a captured image to enable proper registration.
If done accurately, the synthetic model's physical appearance can simulate the imaged modality of interest while simultaneously removing the 3D ambiguity between the model and the captured image. Once registered, the captured image can then be archived as a texture map on the geometric site model. In this way, the 3D information that was lost when the image was acquired can be regained and properly related with other datasets for data fusion and analysis.
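Orienting a 3D model to the viewing perspective of a captured image is, at its core, a camera projection problem. The minimal pinhole-camera sketch below illustrates that general idea only; it is not the specific mathematical framework developed in the thesis, and all parameter values are invented.

```python
# Minimal pinhole-camera sketch: project a camera-frame 3D model point
# into image coordinates given focal length and principal point. This
# is the basic operation behind rendering a site model from a captured
# image's viewpoint; the real framework handles full pose, distortion,
# and sensor modality on top of this.

def project(point, f, cx, cy):
    """Project a camera-frame 3D point (x, y, z) to pixel (u, v)."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (f * x / z + cx, f * y / z + cy)

# A point 10 m in front of a camera with a 1000 px focal length and
# principal point at (512, 512):
u, v = project((2.0, -1.0, 10.0), f=1000.0, cx=512.0, cy=512.0)
print(u, v)  # 712.0 412.0
```

Inverting this mapping per pixel is what allows a registered image to be archived back onto the model as a texture map.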

    Semantic segmentation of roof materials in urban environment by utilizing hyperspectral and LiDAR data

    Mapping areas in an urban environment can be challenging due to the variety of materials and manufactured structures. The urban environment is a mix of natural and artificial materials, and finding the right object of a specific material is a challenge even for the trained eye. By applying high spectral resolution hyperspectral imagery, it is possible to examine surface materials based on their spectral signatures. Combined with LiDAR, it is also feasible to detect the geometrical structure of the surface. These data can be fed to a machine learning algorithm to recognize objects automatically. In this study, machine learning algorithms are applied to airborne images of roof materials. This thesis presents an application of semantic segmentation for roof materials based on fused hyperspectral (HySpex VNIR-1800 and SWIR-384) and LiDAR (Riegl VQ-560i) data acquired in 2021 over Bærum municipality near Oslo in Norway. The machine learning algorithm is a semantic segmentation model named Res-U-Net, with a U-Net architecture and a ResNet34 backbone. The Res-U-Net is a supervised neural network with a high capacity to learn from high-dimensional airborne data. The model returns a mask of the urban area that pinpoints the roofs' positions and materials. The ground truth is generated with information from field work, a geographical database, and the watershed algorithm for object detection. This ground truth consists of nine different roof materials plus background. The semantic segmentation model is optimized by testing different model configurations for this specific problem. The best model scores 0.903, 0.896, and 0.579 in accuracy, weighted F1 score, and Matthews Correlation Coefficient, respectively. For the binary problem of detecting roofs, the model scores 0.948, 0.946, and 0.767 on the same metrics. This study demonstrates that semantic segmentation is viable for localizing and classifying roof materials with fused hyperspectral and LiDAR data.
Such an analysis can potentially automate several mapping tasks and manual assignments by systematically processing a large area in a short time, freeing human capacity.
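The three reported metrics (accuracy, weighted F1, and Matthews Correlation Coefficient) can all be derived from a confusion matrix. A minimal sketch for the binary roof/background case follows; the confusion counts are invented for illustration, not taken from the study.

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Accuracy, F1, and Matthews Correlation Coefficient (MCC)
    from a binary confusion matrix."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # MCC stays informative even when the two classes are imbalanced,
    # which is why it is reported alongside accuracy and F1.
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, f1, mcc

# Illustrative counts only (not the study's confusion matrix):
acc, f1, mcc = binary_metrics(tp=900, fp=50, fn=60, tn=990)
print(round(acc, 3), round(f1, 3), round(mcc, 3))
```

Note how MCC comes out noticeably lower than accuracy on the same counts, mirroring the pattern in the reported scores (0.948 accuracy vs. 0.767 MCC for the binary problem).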

    Implementing an object-based multi-index protocol for mapping surface glacier facies from Chandra-Bhaga basin, Himalaya

    Surface glacier facies are superficial expressions of a glacier that are distinguishable by their differing spectral and structural characteristics, which vary with age and inter-mixed impurities. A growing body of literature suggests that the varying properties of surface glacier facies differentially influence the melt of the glacier, thus affecting its mass balance. Incorporating these variations into distributed mass balance modelling can improve the accuracy of these models. However, detecting and subsequently mapping these facies with a high degree of accuracy is a necessary precursor to such complex modelling. The variations in the reflectance spectra of various glacier facies permit multiband imagery to exploit band ratios for their effective extraction. However, coarse and medium spatial resolution multispectral imagery can limit the efficacy of band ratioing by obscuring the minor spatial and spectral variations of a glacier. Very high-resolution imagery, on the other hand, introduces distortions into the information conventionally extracted through pixel-based classification. Therefore, robust and adaptable methods coupled with higher resolution data products are necessary to effectively map glacier facies. This study endeavours to identify and isolate glacier facies on two unnamed glaciers in the Chandra-Bhaga basin, Himalaya, using an established object-based multi-index protocol. Exploiting the very high resolution offered by WorldView-2 and its eight spectral bands, this study implements customized spectral index ratios in an object-based environment. Pixel-based supervised classification is also performed using three popular classifiers to comparatively gauge the classification accuracies. The object-based multi-index protocol delivered the highest overall accuracy of 86.67%.
The Minimum Distance classifier yielded the lowest overall accuracy of 62.50%, whereas the Mahalanobis Distance and Maximum Likelihood classifiers yielded overall accuracies of 77.50% and 70.84%, respectively. The results outline the superiority of the object-based method for the extraction of glacier facies. Forthcoming studies must refine the indices and test their applicability in wide-ranging scenarios.
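The customized spectral index ratios at the heart of such a protocol are, in general form, normalized band ratios evaluated per image object rather than per pixel. The sketch below illustrates only that general form; the band names, pairings, and reflectance values are hypothetical and do not reproduce the study's actual indices.

```python
def normalized_ratio(band_a, band_b):
    """Generic normalized-difference band ratio of the kind used to
    separate spectrally distinct surface classes."""
    return (band_a - band_b) / (band_a + band_b)

# Object-based use: the ratio is computed on the mean reflectance of
# each segmented image object, not on individual pixels, which damps
# the pixel-level noise of very high-resolution imagery.
# All object names and reflectance values below are illustrative.
objects = {
    "facies_a": {"band_a": 0.80, "band_b": 0.10},
    "facies_b": {"band_a": 0.35, "band_b": 0.30},
}
for name, bands in objects.items():
    value = normalized_ratio(bands["band_a"], bands["band_b"])
    print(name, round(value, 3))
```

Thresholding such per-object index values, one index per target facies, is what a multi-index protocol chains together.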

    Ash Tree Identification Based on the Integration of Hyperspectral Imagery and High-density Lidar Data

    Monitoring and management of ash trees has become particularly important in recent years due to the heightened risk of attack from the invasive pest, the emerald ash borer (EAB). However, distinguishing ash from other deciduous trees can be challenging. Hyperspectral imagery and light detection and ranging (LiDAR) data are two valuable data sources that are often used for tree species classification. Hyperspectral imagery measures detailed spectral reflectance related to the biochemical properties of vegetation, while LiDAR data measure the three-dimensional structure of tree crowns related to morphological characteristics. Thus, the accuracy of vegetation classification may be improved by combining both techniques. Therefore, the objective of this research is to integrate hyperspectral imagery and LiDAR data to improve ash tree identification. Specifically, the research aims include: 1) using LiDAR data for individual tree crown segmentation; 2) using hyperspectral imagery for the extraction of relatively pure crown spectra; and 3) fusing hyperspectral and LiDAR data for ash tree identification. It is expected that the classification accuracy of ash trees will be significantly improved by the integration of hyperspectral and LiDAR techniques. Analysis results suggest that, first, the 3D crown structures of individual trees can be reconstructed using a set of generalized geometric models that optimally match the LiDAR-derived raster image, and crown widths can be further estimated using tree height and shape-related parameters as independent variables and ground measurements of crown widths as dependent variables. Second, with a constrained linear spectral mixture analysis method, the fractions of all materials within a pixel can be extracted, and relatively pure crown-scale spectra can be further calculated using the illuminated-leaf fraction as a weighting factor for tree species classification.
Third, both a crown shape index (SI) and the coefficient of variation (CV) can be extracted from LiDAR data as variables that remain invariant over a tree's life cycle, and they improve ash tree identification when integrated with pixel-weighted crown spectra. Therefore, this research makes three major contributions to the field of tree species classification: 1) the automatic estimation of individual tree crown width from LiDAR data by combining a generalized geometric model and a regression model; 2) the computation of relatively pure crown-scale spectral reflectance using a pixel-weighting algorithm for tree species classification; and 3) the fusion of shape-related structural features and pixel-weighted crown-scale spectral features for improving ash tree identification.
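The pixel-weighting step can be read as a fraction-weighted average of crown pixel spectra, with each pixel's illuminated-leaf fraction (from the linear spectral mixture analysis) as its weight. A minimal sketch under that assumption follows; the spectra and fractions are invented for illustration.

```python
def crown_spectrum(pixel_spectra, leaf_fractions):
    """Weighted average of pixel spectra over one crown, using each
    pixel's illuminated-leaf fraction as its weight, so that heavily
    shadowed or mixed pixels contribute less to the crown-scale
    spectrum."""
    total = sum(leaf_fractions)
    n_bands = len(pixel_spectra[0])
    return [
        sum(f * s[b] for f, s in zip(leaf_fractions, pixel_spectra)) / total
        for b in range(n_bands)
    ]

# Three crown pixels, two bands (values illustrative): the pixels with
# high illuminated-leaf fractions dominate the result, while the mostly
# shadowed/mixed third pixel is nearly ignored.
spectra = [[0.10, 0.50], [0.12, 0.55], [0.30, 0.20]]
fractions = [0.9, 0.8, 0.1]
print(crown_spectrum(spectra, fractions))
```

The resulting crown-scale spectrum is what then feeds the species classifier in place of raw, mixture-contaminated pixels.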

    Multispectral Indices for Wildfire Management

    This paper highlights and summarizes the most important multispectral indices and associated methodologies for fire management. Various fields of study are examined where multispectral indices align with wildfire prevention and management, including vegetation and soil attribute extraction, water feature mapping, artificial structure identification, and post-fire burnt area estimation. The versatility and effectiveness of multispectral indices in addressing specific issues in wildfire management are emphasized, and fundamental insights for optimizing data extraction are presented. Concrete indices for each task, including the NDVI and the NDWI, are suggested. Moreover, to enhance accuracy and address the inherent limitations of individual index applications, the integration of complementary processing solutions and additional data sources, such as high-resolution imagery and ground-based measurements, is recommended. This paper aims to be an immediate and comprehensive reference for researchers and stakeholders working on multispectral indices related to the prevention and management of fires.
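The two named indices, NDVI and NDWI, are both standard normalized differences of band reflectances. A minimal sketch with illustrative reflectance values (using the McFeeters green/NIR formulation of NDWI):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: high for healthy
    vegetation, which reflects strongly in NIR and weakly in red."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters formulation):
    positive over open water, which absorbs NIR."""
    return (green - nir) / (green + nir)

# Illustrative surface reflectances, not measured values:
print(round(ndvi(0.55, 0.08), 2))  # healthy vegetation: strongly positive
print(round(ndwi(0.25, 0.05), 2))  # open water: positive
```

In a fire-management context, NDVI maps fuel condition before a fire and vegetation loss after it, while NDWI delineates water features relevant to suppression planning.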

    Derivation of forest inventory parameters from high-resolution satellite imagery for the Thunkel area, Northern Mongolia. A comparative study on various satellite sensors and data analysis techniques.

    With the demise of the Soviet Union and the transition to a market economy starting in the 1990s, Mongolia has been experiencing dramatic changes resulting in social and economic disparities and an increasing strain on its natural resources. The situation is exacerbated by a changing climate, the erosion of forestry-related administrative structures, and a lack of law enforcement activities. Mongolia's forests have been afflicted with a dramatic increase in degradation due to human and natural impacts such as overexploitation and wildfire occurrences. In addition, forest management practices are far from sustainable. In order to provide useful information on how to viably and effectively utilise the forest resources in the future, the gathering and analysis of forest-related data is pivotal. Although a National Forest Inventory was conducted in 2016, very little reliable and scientifically substantiated information exists at the regional or even local level. This lack of detailed information warranted a study, performed in the Thunkel taiga area in 2017 in cooperation with the GIZ. In this context, we hypothesise that (i) tree species and composition can be identified utilising the aerial imagery, (ii) tree height can be extracted from the resulting canopy height model with accuracies commensurate with field survey measurements, and (iii) high-resolution satellite imagery is suitable for the extraction of tree species, the number of trees, and the upscaling of timber volume and basal area based on the spectral properties. The outcomes of this study illustrate quite clearly the potential of employing UAV imagery for tree height extraction (R2 of 0.9) as well as for species and crown diameter determination. However, in a few instances, the visual interpretation of the aerial photographs was determined to be superior to the computer-aided automatic extraction of forest attributes. In addition, imagery from various satellite sensors (e.g.
Sentinel-2, RapidEye, WorldView-2) proved to be well suited for the delineation of burned areas and the assessment of tree vigour. Furthermore, recently developed classification approaches such as Support Vector Machines and Random Forest appear to be well suited to tree species discrimination (overall accuracy of 89%). Object-based classification approaches appear to be highly suitable for very high-resolution imagery; however, at medium scale, pixel-based classifiers outperformed them. It is also suggested that high radiometric resolution has the potential to compensate for a lack of spatial detectability in the imagery. Quite surprising was the occurrence of dark taiga species in the riparian areas, beyond their natural habitat range. The presented results matrix and the interpretation key have been devised as a decision tool and/or a vademecum for practitioners. In consideration of future projects and to facilitate the improvement of the forest inventory database, the establishment of permanent sampling plots in the Mongolian taiga is strongly advised.
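The R² used to validate UAV-derived tree heights against field measurements is the ordinary coefficient of determination. A minimal sketch follows; the sample heights are invented for illustration and are not the study's data.

```python
def r_squared(observed, predicted):
    """Coefficient of determination between field-measured values and
    remotely derived estimates: 1 minus the ratio of residual to total
    sum of squares."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Hypothetical tree heights in metres (field survey vs. UAV-derived
# canopy height model):
field = [12.1, 15.4, 9.8, 18.0, 14.2]
uav = [11.8, 15.9, 10.3, 17.5, 14.0]
print(round(r_squared(field, uav), 3))
```

A value near 1 indicates the canopy height model reproduces the field measurements almost exactly, which is the sense in which the study's R² of 0.9 supports hypothesis (ii).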