18 research outputs found

    Doing more with less: A comparative assessment between morphometric indices and machine learning models for automated gully pattern extraction (A case study: Dashtiari region, Sistan and Baluchestan Province)

    No full text
    Deep gullies in the Dashtiari Region prompted us to couple different morphometric indices obtained from a UAV-derived DEM to automatically extract gully signatures. The extraction of gully signatures is commonly undertaken via pattern recognition techniques, whose recent advancements tend to require more data and rather cumbersome modeling processes, making them difficult to apply for those who are not well-versed in such contexts. Among these methods, object-based image analysis (OBIA), machine learning, and deep learning techniques are the most common. Conversely, here we took advantage of simple morphometric indices and their combinations for gully extraction, including valley depth (VD), topographic position index (TPI), positive openness (PO), red relief image map (RRIM), elevation, slope degree, and the coupled PO-DEM. Furthermore, we compared the automatically derived gully patterns to the manually extracted ones (treated as the ground truth), and their spatial autocorrelation was investigated. Additionally, the classification tree (CT), as a powerful machine learning model, was comparatively assessed against the morphometric indices. The performance of the adopted pattern extraction techniques was estimated using four different metrics: the precision index, true skill statistic (TSS), Cohen's kappa, and the Matthews correlation coefficient (MCC). The results revealed that the single use of the PO, TPI, and RRIM indices failed to reliably capture the gully pattern, leading to only partial success. Notably, among the combinations of indices, the coupled PO-DEM successfully discriminated gully presence locations from absences and outperformed the CT model in terms of both goodness-of-fit and generalization capacity (prediction power) across all four performance metrics. Hence, compared with the amount of time spent on manual delineation of gullies, the effort required by simple morphometric indices and machine learning models is beyond comparison.
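
    As a rough illustration of the evaluation step described above, the sketch below computes the four agreement metrics (precision, TSS, Cohen's kappa, MCC) for a binary gully mask against a manual reference; the array names and the thresholding step are illustrative assumptions, not the authors' code.

```python
import numpy as np

def binary_agreement_metrics(pred_mask, truth_mask):
    """Confusion-matrix scores for an extracted vs. a manually digitized gully mask."""
    pred = np.asarray(pred_mask, dtype=bool).ravel()
    truth = np.asarray(truth_mask, dtype=bool).ravel()
    tp = float(np.sum(pred & truth))
    tn = float(np.sum(~pred & ~truth))
    fp = float(np.sum(pred & ~truth))
    fn = float(np.sum(~pred & truth))
    n = tp + tn + fp + fn

    precision = tp / (tp + fp)
    tss = tp / (tp + fn) + tn / (tn + fp) - 1             # sensitivity + specificity - 1
    p_obs = (tp + tn) / n                                  # observed agreement
    p_exp = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2  # chance agreement
    kappa = (p_obs - p_exp) / (1 - p_exp)
    mcc = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"precision": precision, "TSS": tss, "kappa": kappa, "MCC": mcc}

# Hypothetical usage: threshold a positive-openness raster to obtain a gully mask,
# then score it against a manually delineated mask of the same shape.
# auto_mask = positive_openness < threshold
# print(binary_agreement_metrics(auto_mask, manual_mask))
```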

    Urban flood hazard modeling using self-organizing map neural network

    No full text
    Floods are the most common natural disaster globally and lead to severe damage, especially in urban environments. This study evaluated the efficiency of a self-organizing map neural network (SOMN) algorithm for urban flood hazard mapping in the case of Amol city, Iran. First, a flood inventory database was prepared using field survey data covering 118 flooded points. A 70:30 data ratio was applied for training and validation purposes. Six factors (elevation, slope percent, distance from river, distance from channel, curve number, and precipitation) were selected as predictor variables. After building the model, the odds ratio skill score (ORSS), efficiency (E), true skill statistic (TSS), and the area under the receiver operating characteristic curve (AUC-ROC) were used as evaluation metrics to scrutinize the goodness-of-fit and predictive performance of the model. The results indicated that the SOMN model performed excellently in modeling flood hazard in both the training (AUC = 0.946, E = 0.849, TSS = 0.716, ORSS = 0.954) and validation (AUC = 0.924, E = 0.857, TSS = 0.714, ORSS = 0.945) steps. The model identified around 23% of the Amol city area as falling into the high or very high flood risk classes, which need to be carefully managed. Overall, the results demonstrate that the SOMN model can be used for flood hazard mapping in urban environments and can provide valuable insights for flood risk management.
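
    A minimal sketch of how a self-organizing map can be turned into a flood hazard score is given below, assuming the open-source minisom package, synthetic data in place of the six predictors, and a simple node-labelling step; the abstract does not state which SOM implementation or scoring rule the study actually used.

```python
# pip install minisom
import numpy as np
from minisom import MiniSom

# X: rows = locations, columns = the six predictors named in the abstract
# (elevation, slope %, distance from river, distance from channel, CN, precipitation).
# y: 1 = flooded point, 0 = non-flooded point. Both arrays are placeholders here.
rng = np.random.default_rng(0)
X = rng.random((236, 6))
y = np.repeat([1, 0], 118)

# Min-max scale the predictors so no single factor dominates the distance metric
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

som = MiniSom(7, 7, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=42)
som.train_random(X, 5000)

# One simple way to derive hazard scores: label each map node with the share of
# flooded samples among the training points it wins.
hits = np.zeros((7, 7))
floods = np.zeros((7, 7))
for xi, yi in zip(X, y):
    i, j = som.winner(xi)
    hits[i, j] += 1
    floods[i, j] += yi
node_hazard = np.divide(floods, hits, out=np.zeros_like(floods), where=hits > 0)

# Score a new location by the hazard value of its best-matching node
new_point = X[0]
print("hazard score:", node_hazard[som.winner(new_point)])
```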

    Landslide susceptibility assessment in the Anfu County, China: comparing different statistical and probabilistic models considering the new topo-hydrological factor (HAND)

    No full text
    The present study is aimed at producing a landslide susceptibility map of a landslide-prone area (Anfu County, China) using evidential belief function (EBF), frequency ratio (FR), and Mahalanobis distance (MD) models. To this aim, 302 landslides were mapped based on earlier reports and aerial photographs, as well as several field surveys. The landslide inventory was randomly split into a training dataset (70%; 212 landslides) for training the models, and the remaining data (30%; 90 landslides) were set aside for validation purposes. A total of sixteen geo-environmental conditioning factors were considered as inputs to the models: slope degree, slope aspect, plan curvature, profile curvature, the new topo-hydrological factor termed height above the nearest drainage (HAND), average annual rainfall, altitude, distance from rivers, distance from roads, distance from faults, lithology, normalized difference vegetation index (NDVI), sediment transport index (STI), stream power index (SPI), soil texture, and land use/cover. The susceptibility maps were validated using the area under the receiver operating characteristic curve (AUROC). As a result, the FR model outperformed the others with an AUROC of 84.98%, followed by the EBF (78.63%) and MD (78.50%) models. The percentage of susceptibility classes for each model revealed that the MD model produced a compact map concentrated on highly susceptible areas (high and very high classes), with an overall area of approximately 17%, followed by FR (22.76%) and EBF (31%). The best-performing model (FR) indicated that five factors most strongly influenced landslide occurrence in the area: NDVI, soil texture, slope degree, altitude, and HAND. Interestingly, HAND showed a clearer pattern with regard to landslide occurrence than other topo-hydrological factors such as SPI, STI, and distance to rivers. Lastly, it can be concluded that the susceptibility of the area to landsliding is governed more by a complex set of environmental factors than by anthropogenic ones (residential areas and distance to roads). This finding can provide a basis for further pragmatic measures regarding hazard-planning actions.
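
    For context, the frequency ratio (FR) used as the best-performing model above is simply the ratio of the landslide share to the area share of each factor class. The sketch below, with made-up rasters and factor names, shows that standard computation; it is not the authors' code.

```python
import numpy as np

def frequency_ratio(factor_classes, landslide_mask):
    """FR per class = (% of landslide cells in the class) / (% of all cells in the class)."""
    total_cells = factor_classes.size
    total_slides = landslide_mask.sum()
    fr = {}
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        pct_slides = landslide_mask[in_class].sum() / total_slides
        pct_area = in_class.sum() / total_cells
        fr[c] = pct_slides / pct_area if pct_area > 0 else 0.0
    return fr

# Hypothetical example: a slope raster reclassified into 4 classes and a
# binary landslide inventory raster of the same shape.
rng = np.random.default_rng(1)
slope_classes = rng.integers(1, 5, size=(200, 200))
landslides = rng.random((200, 200)) < 0.01
print(frequency_ratio(slope_classes, landslides))

# The susceptibility index at each cell is then typically the sum of the FR values
# of that cell's classes across all conditioning factors.
```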

    Hybridized neural fuzzy ensembles for dust source modeling and prediction

    Full text link
    Dust storms are believed to play an essential role in many climatological, geochemical, and environmental processes. This atmospheric phenomenon can have a significant negative impact on public health and can significantly disturb natural ecosystems. Identifying dust-source areas is thus a fundamental task in controlling the effects of this hazard. This study is the first attempt to identify dust-source areas using hybridized machine-learning algorithms. Each hybridized model, designed as an intelligent system, consists of an adaptive neuro-fuzzy inference system (ANFIS) integrated with one of several metaheuristic optimization algorithms: the bat algorithm (BA), cultural algorithm (CA), and differential evolution (DE). The data acquired from two key sources – the Moderate Resolution Imaging Spectroradiometer (MODIS) Deep Blue and the Ozone Monitoring Instrument (OMI) – are incorporated into the hybridized models, along with relevant data from field surveys and dust samples. Goodness-of-fit analyses are performed to evaluate the predictive capability of the hybridized models using different statistical criteria, including the true skill statistic (TSS) and the area under the receiver operating characteristic curve (AUC). The results demonstrate that the hybridized ANFIS-DE model (AUC = 84.1%, TSS = 0.73) outperforms the other comparative hybridized models tailored for dust-storm prediction. The results provide evidence that the hybridized ANFIS-DE model should be explored as a promising, cost-effective method for efficiently identifying dust-source areas, with benefits for both public health and natural environments where excessive dust presents significant challenges.
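
    A full ANFIS is too long to reproduce here, so the sketch below only illustrates the hybridization idea: a differential evolution loop searching the Gaussian membership and consequent parameters of a tiny zero-order Sugeno system on synthetic data. The variable names, the toy labels, and the reduction to a single input are all assumptions for illustration, not the study's model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy data standing in for a dust-source predictor vs. presence labels
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200)
y = (x > 0.6).astype(float)          # hypothetical "dust source" labels

def sugeno_predict(params, x):
    """Zero-order Sugeno system with two Gaussian rules (a minimal ANFIS stand-in)."""
    m1, s1, c1, m2, s2, c2 = params
    w1 = np.exp(-((x - m1) ** 2) / (2 * s1 ** 2))
    w2 = np.exp(-((x - m2) ** 2) / (2 * s2 ** 2))
    return (w1 * c1 + w2 * c2) / (w1 + w2 + 1e-12)

def mse(params):
    return np.mean((sugeno_predict(params, x) - y) ** 2)

# Differential evolution searches the membership and consequent parameters,
# which is the role BA / CA / DE play around the ANFIS in the study.
bounds = [(0, 1), (0.05, 1), (0, 1)] * 2   # (mean, sigma, consequent) for each rule
result = differential_evolution(mse, bounds, seed=42, maxiter=200)
print("best MSE:", result.fun)
```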

    Development of novel hybridized models for urban flood susceptibility mapping

    No full text
    Floods in urban environments often result in loss of life and destruction of property, with many negative socio-economic effects. However, the application of most flood prediction models remains challenging due to data scarcity. This creates a need to develop novel hybridized models based on historical urban flood events, using, for example, metaheuristic optimization algorithms and wavelet analysis. The hybridized models examined in this study (Wavelet-SVR-Bat and Wavelet-SVR-GWO), designed as intelligent systems, consist of a support vector regression (SVR) integrated with a combination of wavelet transform and metaheuristic optimization algorithms, namely the grey wolf optimizer (GWO) and the bat optimizer (Bat). The efficiency of the novel hybridized and standalone SVR models for spatial modeling of urban flood inundation was evaluated using different cutoff-dependent and cutoff-independent evaluation criteria, including the area under the receiver operating characteristic curve (AUC), accuracy (A), Matthews correlation coefficient (MCC), misclassification rate (MR), and F-score. The results demonstrated that both hybridized models performed very well (Wavelet-SVR-GWO: AUC = 0.981, A = 0.92, MCC = 0.86, MR = 0.07; Wavelet-SVR-Bat: AUC = 0.972, A = 0.88, MCC = 0.76, MR = 0.11) compared with the standalone SVR (AUC = 0.917, A = 0.85, MCC = 0.7, MR = 0.15). Therefore, these hybridized models are a promising, cost-effective approach for spatial modeling of urban flood susceptibility and for providing in-depth insights to guide flood preparedness and emergency response services.
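
    One way the wavelet preprocessing and the SVR can be chained is sketched below, assuming PyWavelets and scikit-learn. A plain grid search stands in for the grey wolf / bat optimizers (which are not available in scikit-learn), and applying a discrete wavelet transform to each row of synthetic factor values is only an illustrative stand-in for the paper's actual wavelet analysis.

```python
# pip install PyWavelets scikit-learn
import numpy as np
import pywt
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
X = rng.random((300, 8))                 # stand-in conditioning factors per location
y = (X[:, 0] * 2 - X[:, 1] + rng.normal(0, 0.1, 300) > 0.5).astype(float)

def wavelet_features(row, wavelet="haar", level=2):
    """Replace each raw factor vector with its multilevel DWT coefficients."""
    coeffs = pywt.wavedec(row, wavelet, level=level)
    return np.concatenate(coeffs)

Xw = np.apply_along_axis(wavelet_features, 1, X)

# In the paper the SVR hyper-parameters are tuned by GWO or the bat algorithm;
# a plain grid search over C and gamma stands in for that metaheuristic step here.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
    cv=5, scoring="neg_mean_squared_error",
)
grid.fit(Xw, y)
print("best params:", grid.best_params_)
```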

    Concepts for improving machine learning based landslide assessment

    No full text
    The main idea of this chapter is to address some of the key issues that were recognized in Machine Learning (ML) based Landslide Assessment Modeling (LAM). Through the experience of the authors, elaborated in several case studies, including the City of Belgrade in Serbia, the City of Tuzla in Bosnia and Herzegovina, Ljubovija Municipality in Serbia, and the Halenkovice area in the Czech Republic, eight key issues were identified, and appropriate options, solutions, and some new concepts for overcoming them were introduced. The following issues were addressed: Landslide inventory enhancement (overcoming a small number of landslide instances), Choice of attributes (which attributes are appropriate, and the pros and cons of attribute selection/extraction), Classification versus regression (which type of task is more appropriate in particular cases), Choice of ML technique (a discussion of the most popular ML techniques), Sampling strategy (overcoming overfitting by choosing training instances wisely), Cross-scaling (a new concept for improving the algorithm's learning capacity), the Quasi-hazard concept (introducing an artificial temporal basis for upgrading from susceptibility to hazard assessment), and Objective model evaluation (best practice for validating resulting models against the existing inventory). Each of these is followed by appropriate practical examples from one of the abovementioned case studies. The ultimate objective is to provide guidance and inspire the LAM community toward a more innovative approach to modeling.