36 research outputs found

    Multi-source Remote Sensing for Forest Characterization and Monitoring

    As a dominant terrestrial ecosystem of the Earth, forest environments play profound roles in ecology, biodiversity, resource utilization, and management, which highlights the significance of forest characterization and monitoring. Some forest parameters can help track climate change and quantify the global carbon cycle and therefore attract growing attention from various research communities. Compared with traditional in-situ methods, which involve expensive and time-consuming field work, airborne and spaceborne remote sensors collect cost-efficient and consistent observations at global or regional scales and have proven to be an effective means of forest monitoring. With the looming paradigm shift toward data-intensive science and the development of remote sensors, remote sensing data of higher resolution and diversity have become the mainstream in data analysis and processing. However, significant heterogeneities in multi-source remote sensing data largely restrain their use in forest applications, urging the research community to come up with effective synergistic strategies. The work presented in this thesis contributes to the field by exploring the potential of Synthetic Aperture Radar (SAR), SAR Polarimetry (PolSAR), SAR Interferometry (InSAR), Polarimetric SAR Interferometry (PolInSAR), Light Detection and Ranging (LiDAR), and multispectral remote sensing for forest characterization and monitoring from three main aspects: forest height estimation, active fire detection, and burned area mapping.
    First, forest height inversion is demonstrated using airborne L-band dual-baseline repeat-pass PolInSAR data based on modified versions of the Random Motion over Ground (RMoG) model, where the scattering attenuation and wind-derived random motion are described for homogeneous and heterogeneous volume layers, respectively. A boreal and a tropical forest test site are involved in the experiment to explore the flexibility of the different models over different forest types, and based on that, a leveraging strategy is proposed to boost the accuracy of forest height estimation. The accuracy of the model-based forest height inversion is limited by the discrepancy between the theoretical models and actual scenarios and exhibits a strong dependency on the system and scenario parameters. Hence, high vertical accuracy LiDAR samples are employed to assist the PolInSAR-based forest height estimation. This multi-source forest height estimation is reformulated as a pan-sharpening task aiming to generate forest heights with high spatial resolution and vertical accuracy based on the synergy of the sparse LiDAR-derived heights and the information embedded in the PolInSAR data. This process is realized by a specifically designed generative adversarial network (GAN), allowing high-accuracy forest height estimation that is less limited by theoretical models and system parameters. Related experiments are carried out over a boreal and a tropical forest to validate the flexibility of the method.
    Next, an automated active fire detection framework is proposed for medium-resolution multispectral remote sensing data. The core of this framework is a deep-learning-based semantic segmentation model specifically designed for active fire detection, and a dataset is constructed from open-access Sentinel-2 imagery for training and testing the model. The developed framework allows automated Sentinel-2 data download, processing, and generation of active fire detection results from the time and location information provided by the user. Its performance is evaluated in terms of detection accuracy and processing efficiency. The last part of this thesis explores whether coarse burned area products can be further improved through the synergy of multispectral, SAR, and InSAR features with higher spatial resolutions. A Siamese Self-Attention (SSA) classification is proposed for multi-sensor burned area mapping, and a multi-source dataset is constructed at the object level for training and testing. Results are analyzed by test site, feature source, and classification method to assess the improvements achieved by the proposed method. All developed methods are validated with extensive processing of multi-source data acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR), the Land, Vegetation, and Ice Sensor (LVIS), PolSARproSim+, Sentinel-1, and Sentinel-2. I hope these studies constitute a substantial contribution to the forest applications of multi-source remote sensing.
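    The model-based part of the height inversion rests on relating interferometric coherence to forest height through a vertical scattering profile. As a rough illustration of that relationship, the Python snippet below evaluates the classic exponential-profile volume-only coherence for assumed, placeholder values of extinction and vertical wavenumber; it is a minimal sketch, not the thesis's RMoG implementation, which additionally models wind-induced temporal decorrelation.

        import numpy as np

        def rvog_volume_coherence(h_v, sigma, kz, theta_deg=35.0):
            """Complex volume-only coherence of an exponential vertical backscatter
            profile f(z) = exp(2*sigma*z/cos(theta)), 0 <= z <= h_v (RVoG-type model).

            h_v   : forest/volume height [m]
            sigma : mean wave extinction in the volume [Np/m]
            kz    : vertical interferometric wavenumber [rad/m]
            """
            a = 2.0 * sigma / np.cos(np.deg2rad(theta_deg))           # profile growth rate
            num = (np.exp((a + 1j * kz) * h_v) - 1.0) / (a + 1j * kz)  # phase-weighted integral
            den = (np.exp(a * h_v) - 1.0) / a                          # normalisation
            return num / den

        # Taller canopies decorrelate more and shift the phase centre upward
        # (the kz and sigma values below are arbitrary placeholders).
        for h in (10.0, 20.0, 30.0):
            g = rvog_volume_coherence(h_v=h, sigma=0.05, kz=0.10)
            print(f"h_v = {h:4.1f} m   |gamma| = {abs(g):.3f}   arg = {np.angle(g):+.3f} rad")

    The decreasing coherence magnitude and increasing phase with canopy height is exactly the sensitivity that model-based PolInSAR height inversion exploits.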

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. While remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Remote sensing technology applications in forestry and REDD+

    Advances in close-range and remote sensing technologies are driving innovations in forest resource assessments and monitoring on varying scales. Data acquired with airborne and spaceborne platforms provide high(er) spatial resolution, more frequent coverage, and more spectral information. Recent developments in ground-based sensors have advanced 3D measurements, low-cost permanent systems, and community-based monitoring of forests. The UNFCCC REDD+ mechanism has advanced the remote sensing community and the development of forest geospatial products that can be used by countries for international reporting and national forest monitoring. However, an urgent need remains to better understand the options and limitations of remote and close-range sensing techniques in the field of forest degradation and forest change. Therefore, we invite scientists working on remote sensing technologies, close-range sensing, and field data to contribute to this Special Issue. Topics of interest include: (1) novel remote sensing applications that can meet the needs of forest resource information and REDD+ MRV, (2) case studies of applying remote sensing data for REDD+ MRV, (3) time-series algorithms and methodologies for forest resource assessment on different spatial scales, varying from the tree to the national level, and (4) novel close-range sensing applications that can support sustainable forestry and REDD+ MRV. We particularly welcome submissions on data fusion.

    Advanced machine learning algorithms for Canadian wetland mapping using polarimetric synthetic aperture radar (PolSAR) and optical imagery

    Wetlands are complex land cover ecosystems that represent a wide range of biophysical conditions. They are among the most productive ecosystems and provide several important environmental functions. As such, wetland mapping and monitoring using cost- and time-efficient approaches are of great interest for sustainable management and resource assessment. In this regard, satellite remote sensing data are greatly beneficial, as they capture a synoptic and multi-temporal view of landscapes. The ability to extract useful information from satellite imagery greatly affects the accuracy and reliability of the final products. This is of particular concern for mapping complex land cover ecosystems, such as wetlands, where a complex, heterogeneous, and fragmented landscape results in similar backscatter/spectral signatures of land cover classes in satellite images. Accordingly, the overarching purpose of this thesis is to contribute to existing methodologies of wetland classification by proposing and developing several new techniques based on advanced remote sensing tools and optical and Synthetic Aperture Radar (SAR) imagery. Specifically, the importance of employing an efficient speckle reduction method for polarimetric SAR (PolSAR) image processing is discussed and a new speckle reduction technique is proposed. Two novel techniques are also introduced for improving the accuracy of wetland classification. In particular, a new hierarchical classification algorithm using multi-frequency SAR data is proposed that discriminates wetland classes in three steps depending on their complexity and similarity. The experimental results reveal that the proposed method is advantageous for mapping complex land cover ecosystems compared to single-stream classification approaches, which have been extensively used in the literature. Furthermore, a new feature weighting approach is proposed based on the statistical and physical characteristics of PolSAR data to improve the discrimination capability of input features prior to incorporating them into the classification scheme. This study also demonstrates the transferability of existing classification algorithms, which have been developed based on RADARSAT-2 imagery, to compact polarimetry SAR data that will be collected by the upcoming RADARSAT Constellation Mission (RCM). Several well-known deep Convolutional Neural Network (CNN) architectures currently employed in computer vision are also introduced in this thesis, for the first time, for the classification of wetland complexes using multispectral remote sensing data. Finally, this research results in the first provincial-scale wetland inventory maps of Newfoundland and Labrador, produced using Google Earth Engine (GEE) cloud computing resources and open-access Earth Observation (EO) data collected by the Copernicus Sentinel missions. Overall, the methodologies proposed in this thesis address fundamental limitations and challenges of wetland mapping using remote sensing data, which have been ignored in the literature. These challenges include the backscattering/spectrally similar signatures of wetland classes, insufficient classification accuracy of wetland classes, and limitations of wetland mapping on large scales. In addition to the capabilities of the proposed methods for mapping wetland complexes, the use of these developed techniques for classifying other complex land cover types beyond wetlands, such as sea ice and crop ecosystems, offers a potential avenue for further research.
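    To make the idea of a multi-step (hierarchical) classification concrete, the sketch below implements a simplified three-step cascade with random forests on synthetic data: spectrally distinct classes are removed first, and only the most similar classes reach the final classifier. The class groupings, features, and classifier choice are illustrative assumptions, not the algorithm developed in the thesis.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # Synthetic stand-in for per-object SAR/optical features and class labels:
        # 0 = open water, 1 = non-wetland upland, 2-4 = similar wetland classes.
        X = rng.normal(size=(3000, 8))
        y = rng.integers(0, 5, size=3000)

        # Step 1: split off the most distinct class (water) from everything else.
        step1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y == 0)
        # Step 2: among non-water samples, separate wetland from non-wetland.
        m2 = y != 0
        step2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[m2], y[m2] >= 2)
        # Step 3: discriminate only the remaining, highly similar wetland classes.
        m3 = y >= 2
        step3 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[m3], y[m3])

        def classify(x):
            """Route samples through the hierarchy, refining labels step by step."""
            x = np.atleast_2d(x)
            label = np.full(len(x), 1)                 # default: non-wetland
            water = step1.predict(x).astype(bool)
            label[water] = 0
            rest = np.flatnonzero(~water)
            if rest.size:
                wet = rest[step2.predict(x[rest]).astype(bool)]
                if wet.size:
                    label[wet] = step3.predict(x[wet])
            return label

        print("predicted:", classify(X[:10]), "  true:", y[:10])

    A single-stream classifier would instead have to separate all classes at once, which is what the hierarchical scheme is designed to avoid.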

    Cost-Sensitive Learning-based Methods for Imbalanced Classification Problems with Applications

    Analysis and predictive modeling of massive datasets is an extremely significant problem that arises in many practical applications. The task of predictive modeling becomes even more challenging when data are imperfect or uncertain. Real data are frequently affected by outliers, uncertain labels, and an uneven distribution of classes (imbalanced data). Such uncertainties create bias and make predictive modeling an even more difficult task. In the present work, we introduce a cost-sensitive learning (CSL) method to deal with the classification of imperfect data. Typically, most traditional approaches to classification demonstrate poor performance in an environment with imperfect data. We propose the use of CSL with the Support Vector Machine, a well-known data mining algorithm. The results reveal that the proposed algorithm produces more accurate classifiers and is more robust with respect to imperfect data. Furthermore, we explore the best performance measures to tackle imperfect data, along with addressing real problems in quality control and business analytics.
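    As a minimal illustration of the idea, the sketch below trains a standard SVM and a cost-sensitive SVM on a synthetic imbalanced problem using scikit-learn class weights, which is one common way to realise cost-sensitive learning; the data, the 10:1 cost ratio, and the metrics are assumptions for illustration, not the authors' setup.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.metrics import balanced_accuracy_score, f1_score

        # Synthetic imbalanced problem: ~5% positives stand in for the rare class.
        X, y = make_classification(n_samples=4000, n_features=10, weights=[0.95, 0.05],
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # Plain SVM vs. an SVM that charges ten times more for errors on the rare class.
        models = {
            "plain SVM": SVC(kernel="rbf"),
            "cost-sensitive SVM": SVC(kernel="rbf", class_weight={0: 1, 1: 10}),
        }
        for name, model in models.items():
            pred = model.fit(X_tr, y_tr).predict(X_te)
            print(f"{name:>20}: balanced accuracy = {balanced_accuracy_score(y_te, pred):.3f}, "
                  f"minority-class F1 = {f1_score(y_te, pred):.3f}")

    Raising the misclassification cost of the minority class shifts the decision boundary toward the majority class, trading some overall accuracy for better recall on the rare class.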

    Metrics to evaluate compressions algorithms for RAW SAR data

    Modern synthetic aperture radar (SAR) systems have size, weight, power and cost (SWAP-C) limitations, since platforms are becoming smaller while SAR operating modes are becoming more complex. Due to the computational complexity of the SAR processing required for modern SAR systems, performing the processing on board the platform is not a feasible option. Thus, SAR systems are producing an ever-increasing volume of data that needs to be transmitted to a ground station for processing. Compression algorithms are utilised to reduce the data volume of the raw data. However, these algorithms can cause degradation and losses that may degrade the effectiveness of the SAR mission. This study addresses the lack of standardised quantitative performance metrics to objectively quantify the performance of SAR data-compression algorithms. Therefore, metrics were established in two different domains, namely the data domain and the image domain. The data-domain metrics are used to determine the performance of the quantisation and the associated losses or errors it induces in the raw data samples. The image-domain metrics evaluate the quality of the SAR image after SAR processing has been performed. In this study, three well-known SAR compression algorithms were implemented and applied to three real SAR data sets that were obtained from a prototype airborne SAR system. The performance of these algorithms was evaluated using the proposed metrics. Important metrics in the data domain were found to be the compression ratio, the entropy, statistical parameters such as the skewness and kurtosis to measure the deviation from the original distributions of the uncompressed data, and the dynamic range. The data histograms are an important visual representation of the effects of the compression algorithm on the data. An important error measure in the data domain is the signal-to-quantisation-noise ratio (SQNR), along with the phase error for applications where phase information is required to produce the output. Important metrics in the image domain include the dynamic range, the impulse response function, the image contrast, as well as the error measure, the signal-to-distortion-noise ratio (SDNR). The metrics suggested that all three algorithms performed well and are thus well suited for the compression of raw SAR data. The fast Fourier transform block adaptive quantiser (FFT-BAQ) algorithm had the best overall performance, but the analysis of the computational complexity of its compression steps indicated that it has the highest level of complexity compared to the other two algorithms. Since different levels of degradation are acceptable for different SAR applications, a trade-off can be made between the data reduction and the degradation caused by the algorithm. Due to SWAP-C limitations, there also remains a trade-off between the performance and the computational complexity of the compression algorithm.
    Dissertation (MEng)--University of Pretoria, 2019. Electrical, Electronic and Computer Engineering.
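    To make the data-domain metrics concrete, the sketch below computes several of them (compression ratio, entropy, skewness, kurtosis, SQNR, and RMS phase error) for a toy block of complex raw samples passed through a crude uniform quantiser. The quantiser and the 16-bit reference word length are placeholder assumptions standing in for the actual algorithms and data evaluated in the study.

        import numpy as np
        from scipy.stats import skew, kurtosis

        rng = np.random.default_rng(1)

        # Toy stand-in for a block of raw SAR samples (I/Q, roughly Gaussian).
        raw = rng.normal(scale=100.0, size=4096) + 1j * rng.normal(scale=100.0, size=4096)

        def uniform_quantise(v, bits):
            """Crude uniform quantiser for a real-valued array (placeholder compressor)."""
            levels = 2 ** bits
            lo, hi = v.min(), v.max()
            step = (hi - lo) / levels
            idx = np.clip(np.floor((v - lo) / step), 0, levels - 1)
            return lo + (idx + 0.5) * step

        def data_domain_metrics(original, reconstructed, bits, original_bits=16):
            err = original - reconstructed
            sqnr = 10 * np.log10(np.mean(np.abs(original) ** 2) / np.mean(np.abs(err) ** 2))
            phase_err = np.angle(original * np.conj(reconstructed))
            counts, _ = np.histogram(reconstructed.real, bins=2 ** bits)
            p = counts[counts > 0] / counts.sum()
            return {
                "compression ratio": original_bits / bits,
                "entropy [bit/sample]": float(-(p * np.log2(p)).sum()),
                "skewness (I)": float(skew(reconstructed.real)),
                "kurtosis (I)": float(kurtosis(reconstructed.real)),
                "SQNR [dB]": float(sqnr),
                "RMS phase error [rad]": float(np.sqrt(np.mean(phase_err ** 2))),
            }

        for bits in (2, 4, 8):
            rec = uniform_quantise(raw.real, bits) + 1j * uniform_quantise(raw.imag, bits)
            print(f"{bits}-bit:", data_domain_metrics(raw, rec, bits))

    Fewer bits give a higher compression ratio but a lower SQNR and larger phase errors, which is exactly the trade-off the study quantifies; image-domain metrics would additionally require SAR focusing of the quantised data.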

    Quantifying aboveground grass biomass using space-borne sensors : a meta-analysis and systematic review

    DATA AVAILABILITY STATEMENT: The secondary data used in this study are available in the following databases: WoS, IEEE Xplore, Scopus, and Google Scholar. SUPPLEMENTARY MATERIAL: Table S1: the 108 articles extracted from WoS, IEEE Xplore, Scopus, and Google Scholar.
    Recently, the move from cost-tied to open-access data has led to the mushrooming of research in pursuit of algorithms for estimating aboveground grass biomass (AGGB). Nevertheless, a comprehensive synthesis or clear direction on the milestones achieved, or an overview of how these models perform, is lacking. This study synthesises the research from decades of experiments in order to point researchers in the direction of what has been achieved, the challenges faced, and how the models perform. A pool of findings from 108 remote sensing-based AGGB studies published from 1972 to 2020 shows that about 19% of the remote sensing-based algorithms were tested in savannah grasslands. An uneven annual publication yield was observed, with approximately 36% of the research output coming from Asia, whereas countries in the global south yielded few publications (<10%). Optical sensors, particularly MODIS, remain a major source of satellite data for AGGB studies, whilst studies in the global south rarely use active sensors such as Sentinel-1. Optical data tend to produce low regression accuracies that are highly inconsistent across studies compared to radar. Vegetation indices, particularly the Normalised Difference Vegetation Index (NDVI), remain the most frequently used predictor variables. Predictor variables such as sward height, red-edge position, and backscatter coefficients produced consistent accuracies. Deciding on the optimal algorithm for estimating AGGB is daunting due to the lack of overlap in grassland type, location, sensor types, and predictor variables, signalling the need for standardised remote sensing techniques, including data collection methods, to ensure the transferability of remote sensing-based AGGB models across multiple locations.
    Funding: The Agricultural Research Council (ARC) and the National Research Foundation (NRF) Research Chair in Land Use Planning and Management. https://doaj.org/toc/2673-7418. Geography, Geoinformatics and Meteorology. SDG-15: Life on land.
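    Since NDVI is singled out as the most common predictor, the sketch below shows the typical shape of such a model: compute NDVI from red and near-infrared reflectance and regress field-measured AGGB against it. All numbers are synthetic placeholders; the reviewed studies use real plot data and a variety of regression methods.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)

        # Synthetic red/NIR reflectances standing in for plot-level satellite observations.
        red = rng.uniform(0.03, 0.15, size=200)
        nir = rng.uniform(0.20, 0.55, size=200)
        ndvi = (nir - red) / (nir + red)

        # Placeholder "field" biomass loosely increasing with NDVI (kg/ha), purely illustrative.
        aggb = 2500.0 * ndvi + rng.normal(scale=150.0, size=ndvi.size)

        model = LinearRegression().fit(ndvi.reshape(-1, 1), aggb)
        r2 = model.score(ndvi.reshape(-1, 1), aggb)
        print(f"AGGB ~ {model.intercept_:.0f} + {model.coef_[0]:.0f} * NDVI   (R^2 = {r2:.2f})")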

    Modeling of Subsurface Scattering from Ice Sheets for Pol-InSAR Applications

    Remote sensing is a fundamental tool to measure the dynamics of ice sheets and provides valuable information for ice sheet projections under a changing climate. There is, however, the potential to further reduce the uncertainties in these projections by developing innovative remote sensing methods. One of these remote sensing techniques, polarimetric synthetic aperture radar interferometry (Pol-InSAR), has been known for decades to have the potential to assess the geophysical properties below the surface of ice sheets, because microwave signals penetrate into dry snow, firn, and ice. Despite this, only very few studies have addressed this topic and the development of robust Pol-InSAR applications is at an early stage. Two potential Pol-InSAR applications are identified as the motivation for this thesis. The first is the estimation and compensation of the penetration bias in digital elevation models derived with SAR interferometry; this bias can lead to errors of several meters or even tens of meters in surface elevation measurements. The second is the estimation of geophysical properties of the subsurface of glaciers and ice sheets using Pol-InSAR techniques; there is indeed potential to derive information about melt-refreeze processes within the firn, which are related to density and affect the mass balance. Such Pol-InSAR applications can be a valuable information source, with the potential for monthly ice-sheet-wide coverage and the high spatial resolution provided by the next generation of SAR satellites. However, the models required to link the Pol-InSAR measurements to the subsurface properties are not yet established.
    The aim of this thesis is to improve the modeling of the vertical backscattering distribution in the subsurface of ice sheets and of its effect on polarimetric interferometric SAR measurements at different frequencies. To achieve this, polarimetric interferometric multi-baseline SAR data at different frequencies and from two different test sites on the Greenland ice sheet are investigated. This thesis contributes three concepts to a better understanding and a more accurate modeling of the vertical backscattering distribution in the subsurface of ice sheets. The first is the integration of scattering from distinct subsurface layers, which are formed by refrozen melt water in the upper percolation zone and cause an interesting coherence undulation pattern that cannot be explained with previously existing models. This represents a first link between Pol-InSAR data and geophysical subsurface properties. The second is the improved modeling of the general vertical backscattering distribution of the subsurface volume. The advantages of more flexible volume models are demonstrated but, interestingly, the simple modification of a previously existing model with a vertical shift parameter led to the best agreement between model and data. The third contribution is the model-based compensation of the penetration bias, which is experimentally validated. At the investigated test sites, it becomes evident that the model-based estimates of the surface elevation are more accurate than the interferometric phase center locations, which are conventionally used to derive surface elevations of ice sheets. This thesis therefore improves the state of the art of subsurface scattering modeling for Pol-InSAR applications, demonstrates model-based penetration bias compensation, and takes a further research step towards the retrieval of geophysical subsurface information with Pol-InSAR.
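    A hedged illustration of the penetration bias discussed above: for a semi-infinite subsurface volume with an exponentially decaying backscatter profile (the classic single-layer model, not the refined multi-layer and vertically shifted models developed in the thesis), the complex coherence and the depth of the interferometric phase center below the true surface follow directly from the vertical wavenumber and a penetration scale. The numerical values below are arbitrary placeholders.

        import numpy as np

        def subsurface_coherence(kz, d_pen):
            """Complex coherence of a semi-infinite, exponentially decaying subsurface
            backscatter profile f(z) = exp(-z / d_pen), with z the depth below the surface.
            kz: vertical interferometric wavenumber [rad/m]; d_pen: penetration scale [m]."""
            return 1.0 / (1.0 + 1j * kz * d_pen)

        def phase_centre_depth(kz, d_pen):
            """Depth of the interferometric phase centre below the true surface; this
            offset is the penetration bias seen in InSAR-derived elevation models."""
            return -np.angle(subsurface_coherence(kz, d_pen)) / kz

        kz = 0.05   # assumed vertical wavenumber [rad/m]
        for d in (2.0, 5.0, 10.0, 20.0):
            g = subsurface_coherence(kz, d)
            print(f"d_pen = {d:5.1f} m  ->  phase centre {phase_centre_depth(kz, d):5.2f} m "
                  f"below the surface, |coherence| = {abs(g):.3f}")

    Inverting such a model from measured coherences is what allows the bias to be estimated and removed, rather than simply accepting the phase center location as the surface elevation.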

    Using a VNIR Spectral Library to Model Soil Carbon and Total Nitrogen Content

    In-situ soil sensor systems based on visible and near-infrared (VNIR) spectroscopy have not yet been used effectively, due to inadequate studies on utilizing legacy spectral libraries under field conditions. The performance of such systems is significantly affected by spectral discrepancies created by sample intactness and library differences. In this study, four objectives were devised to obtain directives to address these issues. The first objective was to calibrate and evaluate VNIR models statistically and computationally (i.e. in terms of computing resource requirements), using four modeling techniques, namely Partial Least Squares regression (PLS), Artificial Neural Networks (ANN), Random Forests (RF), and Support Vector Regression (SVR), to predict soil carbon and nitrogen contents for the Rapid Carbon Assessment (RaCA) project. The second objective was to investigate whether VNIR modeling accuracy can be improved by sample stratification. The third objective was to evaluate the usefulness of these calibrated models for predicting external soil samples. The final objective was to compare four calibration transfer techniques: Direct Standardization (DS), Piecewise Direct Standardization (PDS), External Parameter Orthogonalization (EPO), and spiking, to transfer field sample scans to laboratory scans of dry ground samples. Results showed that the non-linear modeling techniques (ANN, RF, and SVR) significantly outperform the linear modeling technique (PLS) for all soil properties investigated (accuracy of PLS < RF < SVR ≤ ANN). Local models developed using four auxiliary variables (region, land use/land cover class, master horizon, and textural class) improved the prediction for all properties (especially for PLS models) compared to the global models in terms of Root Mean Squared Error of Prediction, with master horizon models outperforming the other local models. From the calibration transfer study, it was evident that all the calibration transfer techniques (except DS) can correct for spectral influences caused by sample intactness. EPO and spiking coupled with ANN model calibration showed the highest performance in accounting for the intactness of samples. These findings will be helpful for future efforts in linking legacy spectra to field spectra for successful implementation of VNIR sensor systems for vertical or horizontal soil characterization. Advisor: Yufeng G
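    The linear-versus-non-linear comparison at the heart of the first objective can be sketched in a few lines with scikit-learn. The spectra, the non-linear target relationship, and the two models below are synthetic placeholders; the study used the RaCA spectral library and also evaluated ANN and SVR models.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(3)

        # Synthetic "spectra": 500 samples x 200 wavelengths; the target (soil carbon)
        # depends non-linearly on a few bands, which favours the non-linear model.
        X = rng.normal(size=(500, 200))
        carbon = np.exp(0.5 * X[:, 40]) + X[:, 120] ** 2 + rng.normal(scale=0.3, size=500)
        X_tr, X_te, y_tr, y_te = train_test_split(X, carbon, random_state=0)

        models = {
            "PLS (linear)": PLSRegression(n_components=10),
            "Random forest (non-linear)": RandomForestRegressor(n_estimators=200, random_state=0),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            rmsep = mean_squared_error(y_te, np.ravel(model.predict(X_te))) ** 0.5
            print(f"{name:>28}: RMSEP = {rmsep:.3f}")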