
    Uncertainty Assessment of Hyperspectral Image Classification: Deep Learning vs. Random Forest

    Uncertainty assessment techniques have been extensively applied as an estimate of accuracy to compensate for weaknesses of traditional approaches. Traditional approaches to mapping accuracy assessment are based on a confusion matrix, and hence are not only dependent on the availability of test data but also incapable of capturing the spatial variation in classification error. Here, we apply and compare two uncertainty assessment techniques that do not rely on test data availability and enable the spatial characterisation of classification accuracy before the validation phase, supporting the assessment of error propagation within classified imagery products. We compared the performance of an emerging deep neural network (DNN) technique with the popular random forest (RF) technique. Uncertainty was assessed by calculating the Shannon entropy of the class probabilities predicted by the DNN and RF for every pixel. The classification uncertainties of the DNN and RF were quantified for two hyperspectral image datasets, Salinas and Indian Pines. We then compared the uncertainty against the classification accuracy of each technique, represented by a modified root mean square error (RMSE). The results indicate that, across modified RMSE values for various sample sizes of both datasets, the entropy derived from the DNN algorithm is a better estimate of classification accuracy and hence provides a superior uncertainty estimate at the pixel level.
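    A minimal sketch of the per-pixel entropy calculation this abstract describes: `probs` is a stand-in for the class-probability output of a DNN softmax layer or scikit-learn's RandomForestClassifier.predict_proba (shape and names are assumptions, not from the paper).

```python
# Per-pixel Shannon entropy from predicted class probabilities.
# Assumption: `probs` has shape (n_pixels, n_classes) and rows sum to 1.
import numpy as np

def shannon_entropy(probs, eps=1e-12):
    """H = -sum_k p_k * log2(p_k) for each pixel (row)."""
    p = np.clip(probs, eps, 1.0)          # avoid log(0)
    return -(p * np.log2(p)).sum(axis=1)  # high H -> uncertain pixel

# Example: three pixels, three classes
probs = np.array([[0.98, 0.01, 0.01],    # confident  -> low entropy
                  [0.40, 0.35, 0.25],    # ambiguous  -> high entropy
                  [1/3,  1/3,  1/3]])    # maximal uncertainty (log2 3)
print(shannon_entropy(probs))
```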

    Fuzzy Shannon Entropy: A Hybrid GIS-Based Landslide Susceptibility Mapping Method

    Landslide Susceptibility Mapping (LSM) contributes to reducing the risk of living with landslides. Handling the vagueness associated with LSM is a challenging task. Here we show the application of a hybrid GIS-based LSM. The hybrid approach combines fuzzy membership functions (FMFs) with Shannon entropy, a well-known information theory-based method. Nine landslide-related criteria, along with an inventory of 108 recent and historic landslide points, are used to prepare a susceptibility map. A random split into training (≈70%) and testing (≈30%) samples is used for training and validation of the LSM model. The study area, Izeh, is located in the Khuzestan province of Iran, a zone highly susceptible to landslides. The performance of the hybrid method is evaluated using receiver operating characteristic (ROC) curves in combination with the area under the curve (AUC). With an AUC of 0.934, the proposed hybrid method outperforms a previous study of the same study area and dataset that used an extended fuzzy multi-criteria evaluation built on decision makers' subjective judgements (AUC = 0.894).
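    A hedged sketch of the entropy-weighting step such a fuzzy-Shannon-entropy workflow typically involves: criterion values are first rescaled to fuzzy memberships in [0, 1], then Shannon entropy assigns each criterion a weight via its degree of diversification. All array names are illustrative, and the paper's exact combination scheme may differ.

```python
# Entropy-based criterion weights combined with fuzzy memberships.
# Assumption: `fuzzy` is (n_cells, n_criteria) with values in [0, 1].
import numpy as np

def entropy_weights(fuzzy, eps=1e-12):
    p = fuzzy / (fuzzy.sum(axis=0, keepdims=True) + eps)  # column-normalise
    k = 1.0 / np.log(fuzzy.shape[0])                      # k = 1 / ln(m)
    e = -k * (p * np.log(p + eps)).sum(axis=0)            # entropy per criterion
    d = 1.0 - e                                           # diversification
    return d / d.sum()                                    # weights sum to 1

rng = np.random.default_rng(0)
fuzzy = rng.random((1000, 9))             # 9 landslide-related criteria
w = entropy_weights(fuzzy)
susceptibility = fuzzy @ w                # weighted combination per cell
```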

    Towards automatic calibration of neighbourhood influence in cellular automata land-use models

    Cellular Automata (CA) land-use models are widely used for understanding processes of land-use change. However, calibration of these models is a knowledge-intensive and time-consuming process. Although calibrating common driving factors such as accessibility (A) or suitability (S) is a relatively straightforward task, calibrating the neighbourhood dynamics (N), which control the key model behaviour, is often very challenging. Here, building on the SIMLANDER modelling framework, we develop an automatic rule detection (ARD) procedure to automate the calibration of N. To demonstrate the performance of the tool, we simulated 15 years of urban growth in Ahvaz, Iran (2000–2015) using a wide range of different rule-sets. We evaluated the calibration goodness-of-fit of each rule-set against a reference map by cross-comparing multiple metrics through a ranking procedure. The ARD procedure can be implemented in two ways: 1) by random sampling of the parameter space a user-defined number of times, or 2) through a stepwise “grid search” over a user-defined number of rule combinations. Both approaches were found to produce successful rule combinations according to the goodness-of-fit metrics applied. Grid search performed slightly better, but at the cost of a fivefold increase in computation time. The ARD approach facilitates model calibration by allowing rapid identification of the optimum rule-set from a wide range of possible parameter settings, while the ranking procedure facilitates comparison of simulations using multiple metrics. The approach we present also helps to improve simulation accuracy with respect to manual calibration methods, and increases understanding of neighbourhood dynamics for the urban area studied. To encourage repeatability and transparency, our approach uses only open data and Free-and-Open Source Software (the RStudio environment), and we provide our ARD scripts as supplementary material.
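    An illustrative toy (not the authors' SIMLANDER/ARD code) of the two calibration modes the abstract contrasts: random sampling of the neighbourhood parameter space versus an exhaustive grid search. The CA here is reduced to a single hypothetical neighbourhood-threshold rule, and `fit` is a simple cell-agreement score standing in for the paper's multi-metric ranking.

```python
# Toy CA calibration: recover the neighbourhood rule that best
# reproduces a reference map, by random search vs. grid search.
import random
import numpy as np

def simulate(seed_map, n_thresh, steps=15):
    """Toy CA: a cell urbanises if >= n_thresh of its 8 neighbours are urban."""
    m = seed_map.copy()
    for _ in range(steps):
        nb = sum(np.roll(np.roll(m, i, 0), j, 1)
                 for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
        m = m | (nb >= n_thresh)
    return m

def fit(sim, ref):
    return (sim == ref).mean()             # fraction of agreeing cells

rng = np.random.default_rng(1)
seed = rng.random((50, 50)) < 0.02         # sparse "year 2000" urban seed
ref = simulate(seed, 3)                    # stand-in "2015" reference map

# 1) random sampling of the parameter space, a user-defined number of times
cands = [random.randint(1, 8) for _ in range(20)]
best_rand = max(cands, key=lambda t: fit(simulate(seed, t), ref))

# 2) stepwise grid search over all rule combinations
best_grid = max(range(1, 9), key=lambda t: fit(simulate(seed, t), ref))
print(best_rand, best_grid)                # both should recover n_thresh = 3
```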

    A robust rule-based ensemble framework using mean-shift segmentation for hyperspectral image classification

    This paper assesses the performance of DoTRules, a dictionary of trusted rules, as a supervised rule-based ensemble framework built on mean-shift segmentation for hyperspectral image classification. The proposed ensemble framework consists of multiple rule sets, with rules constructed from different class frequencies and sequences of occurrence. Shannon entropy is derived to assess the uncertainty of every rule and subsequently filter out unreliable rules. DoTRules is not only a transparent approach to image classification but also a tool to map rule uncertainty, where rule uncertainty assessment can be applied as an estimate of classification accuracy prior to image classification. In this research, the proposed framework is implemented using three widely used reference hyperspectral image datasets. We found that the overall accuracy of classification using the proposed ensemble framework was superior to that of state-of-the-art ensemble algorithms, as well as two non-ensemble algorithms, at multiple training sample sizes. We believe DoTRules can be applied more generally to the classification of discrete data such as hyperspectral satellite imagery products.
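    A minimal sketch, under assumptions, of the entropy-based rule filtering described above: each "rule" maps a discretised pixel signature to observed class counts, and rules whose class distribution has high Shannon entropy are treated as untrusted and dropped. The names and the threshold are illustrative, not taken from the paper.

```python
# Build frequency-based rules, score each by Shannon entropy,
# and keep only the low-entropy ("trusted") rules.
from collections import Counter, defaultdict
import math

def build_rules(signatures, labels):
    rules = defaultdict(Counter)
    for sig, y in zip(signatures, labels):
        rules[sig][y] += 1                  # class frequencies per rule
    return rules

def rule_entropy(counts):
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def trusted_rules(rules, max_entropy=0.5):  # threshold is an assumption
    return {sig: counts.most_common(1)[0][0]
            for sig, counts in rules.items()
            if rule_entropy(counts) <= max_entropy}

sigs = [(1, 2), (1, 2), (1, 2), (3, 0), (3, 0)]
ys   = ["corn", "corn", "soy", "grass", "grass"]
rules = build_rules(sigs, ys)
print({s: rule_entropy(c) for s, c in rules.items()})
print(trusted_rules(rules))                 # keeps only the unambiguous rule
```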

    Landslide susceptibility assessment at the Wuning area, China: a comparison between multi-criteria decision making, bivariate statistical and machine learning methods

    The aim of this research is to investigate multi-criteria decision making [spatial multi-criteria evaluation (SMCE)], bivariate statistical [frequency ratio (FR), index of entropy (IOE), weighted linear combination (WLC)] and machine learning [support vector machine (SVM)] models for estimating landslide susceptibility in the Wuning area, China. A total of 445 landslides were randomly split 70%/30% to train and validate the landslide models, respectively. Fourteen landslide conditioning factors, including slope angle, slope aspect, altitude, topographic wetness index, stream power index, sediment transport index, soil, lithology, NDVI, land use, rainfall, distance to road, distance to river and distance to fault, were then studied for landslide susceptibility assessment. The performance of the five models was evaluated using the area under the ROC curve (AUROC) for the training (success rate curve) and validation (prediction rate curve) datasets, together with statistical measures and tests. Results indicated that the area under the success rate curve for the FR, IOE, WLC, SVM and SMCE models was 88.32%, 82.58%, 78.91%, 85.47% and 89.96%, respectively, showing that SMCE provided the highest accuracy. The prediction-capability findings likewise ranked the SMCE model highest (AUC = 86.81%), followed by the FR (AUC = 84.53%), SVM (AUC = 81.24%), IOE (AUC = 79.67%) and WLC (AUC = 73.92%) methods. The landslide susceptibility maps derived from these five models are reasonably accurate and could be used for elementary land-use planning for hazard mitigation.
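    A hedged sketch of the frequency ratio (FR) statistic, one of the bivariate methods compared above: for each class of a conditioning factor, FR is the share of landslide pixels falling in that class divided by the share of all pixels in that class, with FR > 1 indicating above-average susceptibility. The arrays below are synthetic illustrations.

```python
# Frequency ratio per class of a conditioning-factor map.
import numpy as np

def frequency_ratio(factor, landslide):
    """factor: integer class map; landslide: boolean inventory mask."""
    fr = {}
    for c in np.unique(factor):
        in_class = factor == c
        pct_slides = landslide[in_class].sum() / landslide.sum()
        pct_area = in_class.sum() / factor.size
        fr[int(c)] = pct_slides / pct_area  # FR > 1 => susceptible class
    return fr

rng = np.random.default_rng(2)
slope_class = rng.integers(1, 5, size=(100, 100))      # e.g. binned slope angle
slides = rng.random((100, 100)) < 0.01 * slope_class   # steeper -> more slides
print(frequency_ratio(slope_class, slides))
```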