3 research outputs found

    Urban flood hazard modeling using self-organizing map neural network

    No full text
    Abstract: Floods are the most common natural disaster globally and lead to severe damage, especially in urban environments. This study evaluated the efficiency of a self-organizing map neural network (SOMN) algorithm for urban flood hazard mapping in the case of Amol city, Iran. First, a flood inventory database was prepared using field survey data covering 118 flooded points. A 70:30 data ratio was applied for training and validation purposes. Six factors (elevation, slope percent, distance from river, distance from channel, curve number, and precipitation) were selected as predictor variables. After building the model, the odds ratio skill score (ORSS), efficiency (E), true skill statistic (TSS), and the area under the receiver operating characteristic curve (AUC-ROC) were used as evaluation metrics to scrutinize the goodness-of-fit and predictive performance of the model. The results indicated that the SOMN model performed excellently in modeling flood hazard in both the training (AUC = 0.946, E = 0.849, TSS = 0.716, ORSS = 0.954) and validation (AUC = 0.924, E = 0.857, TSS = 0.714, ORSS = 0.945) steps. The model identified around 23% of the Amol city area as being in high or very high flood risk classes that need to be carefully managed. Overall, the results demonstrate that the SOMN model can be used for flood hazard mapping in urban environments and can provide valuable insights about flood risk management.
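    The abstract does not include the authors' code; the following is a minimal Python sketch of how the four reported verification metrics could be computed from observed flooded/non-flooded labels and predicted probabilities. The function name hazard_metrics, the 0.5 threshold, and the treatment of efficiency (E) as overall accuracy are assumptions, not the paper's definitions.

        # Sketch: goodness-of-fit metrics (AUC, E, TSS, ORSS) from binary predictions.
        import numpy as np
        from sklearn.metrics import confusion_matrix, roc_auc_score

        def hazard_metrics(y_true, y_prob, threshold=0.5):
            # Binarize predicted probabilities at an assumed 0.5 cutoff.
            y_pred = (np.asarray(y_prob) >= threshold).astype(int)
            tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            return {
                "AUC": roc_auc_score(y_true, y_prob),
                "E": (tp + tn) / (tp + tn + fp + fn),            # efficiency as overall accuracy (assumed)
                "TSS": sensitivity + specificity - 1,             # true skill statistic
                "ORSS": (tp * tn - fp * fn) / (tp * tn + fp * fn),  # odds ratio skill score (Yule's Q)
            }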

    The effect of sample size on different machine learning models for groundwater potential mapping in mountain bedrock aquifers

    No full text
    Abstract: Machine learning models have attracted much research attention for groundwater potential mapping. However, the accuracy of such models is significantly influenced by sample size, and this remains a challenge. This study evaluates the influence of sample size on the accuracy of four individual and hybrid models, the adaptive neuro-fuzzy inference system (ANFIS), ANFIS-imperial competitive algorithm (ANFIS-ICA), alternating decision tree (ADT), and random forest (RF), for modelling groundwater potential, with spring inventories ranging from 177 to 714 springs. A well-documented inventory of springs, as a natural representative of groundwater potential, was used to designate four sample data sets: 100% (D₁), 75% (D₂), 50% (D₃), and 25% (D₄) of the entire spring inventory. Each data set was randomly split into two groups of 30% (for training) and 70% (for validation). Fifteen diverse geo-environmental factors were employed as independent variables. The area under the receiver operating characteristic curve (AUROC) and the true skill statistic (TSS) were used as cutoff-independent and cutoff-dependent performance metrics, respectively, to assess the performance of the models. Results showed that sample size influenced the performance of all four machine learning algorithms, but RF was less sensitive to the reduction in sample size. In addition, validation results revealed that RF (AUROC = 90.74–96.32%, TSS = 0.79–0.85) had the best performance across all four sample data sets, followed by ANFIS-ICA (AUROC = 81.23–91.55%, TSS = 0.74–0.81), ADT (AUROC = 79.29–88.46%, TSS = 0.59–0.74), and ANFIS (AUROC = 73.11–88.43%, TSS = 0.59–0.74). Further, relative slope position, lithology, and distance from faults were the main spring-affecting factors contributing to groundwater potential modelling. This study can provide useful guidelines and a valuable reference for selecting machine learning models when a complete spring inventory in a watershed is unavailable.
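    As an illustration only, the sketch below shows how the sample-size experiment could be set up for one of the four models (random forest) with scikit-learn: subsample the inventory to D₁–D₄, apply the 30/70 train/validation split described in the abstract, and report AUROC and TSS. X and y are assumed to be a predictor matrix and binary spring/non-spring labels as NumPy arrays; the ANFIS, ANFIS-ICA, and ADT variants and the 15 geo-environmental layers are not reproduced here.

        # Sketch: effect of sample size on a random forest groundwater-potential model.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import confusion_matrix, roc_auc_score
        from sklearn.model_selection import train_test_split

        def evaluate_sample_size(X, y, fraction, seed=0):
            # Subsample the inventory to mimic D1 (100%) through D4 (25%).
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
            # 30% training / 70% validation split, as described in the abstract.
            X_tr, X_va, y_tr, y_va = train_test_split(
                X[idx], y[idx], train_size=0.3, random_state=seed, stratify=y[idx])
            model = RandomForestClassifier(n_estimators=500, random_state=seed).fit(X_tr, y_tr)
            prob = model.predict_proba(X_va)[:, 1]
            tn, fp, fn, tp = confusion_matrix(y_va, (prob >= 0.5).astype(int)).ravel()
            tss = tp / (tp + fn) + tn / (tn + fp) - 1
            return roc_auc_score(y_va, prob), tss

        # Example: compare the four sample sizes.
        # for frac in (1.0, 0.75, 0.5, 0.25):
        #     print(frac, evaluate_sample_size(X, y, frac))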

    TET: an automated tool for evaluating suitable check-dam sites based on sediment trapping efficiency

    No full text
    Abstract: Sediment control is important for supplying clean water. Although check dams control sediment yield, site selection for check dams based on sediment trapping efficiency (TE) is often complex and time-consuming. Currently, a multi-step trial-and-error process is used to find the optimal sediment TE for check-dam construction, which limits this approach in practice. To cope with this challenge, we developed a user-friendly, cost- and time-efficient geographic information system (GIS)-based tool, the trap efficiency tool (TET), in the Python programming language. We applied the tool to two watersheds, the Hableh-Rud and the Poldokhtar, in Iran. To identify suitable sites for check dams, four scenarios (S1: TE ≥ 60%, S2: TE ≥ 70%, S3: TE ≥ 80%, S4: TE ≥ 90%) were tested. TET identified 189, 117, 96, and 77 suitable sites for building check dams in S1, S2, S3, and S4, respectively, in the Hableh-Rud watershed, and 346, 204, 156, and 60 sites in S1, S2, S3, and S4, respectively, in the Poldokhtar watershed. Evaluation of 136 existing check dams in the Hableh-Rud watershed indicated that only 10% and 5% were well located, falling in the TE classes of 80–90% and ≥90%, respectively. In the Poldokhtar watershed, only 11% and 8% of the 207 existing check dams fell into the TE classes of 80–90% and ≥90%, respectively. Thus, the conventional approach for locating sites at which check dams should be constructed is not effective at reaching suitable sediment control efficiency. Importantly, TET provides valuable insights for site selection of check dams and can help decision makers avoid monetary losses incurred by inefficient check-dam performance.
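    For illustration, the Python sketch below shows only the scenario-screening step implied by the abstract: candidate sites with a precomputed TE value are filtered against the four thresholds S1–S4. It is not the TET code, and how TET derives TE from the GIS layers is not reproduced; the dictionary-based site structure and the function name screen_sites are assumptions.

        # Sketch: filter candidate check-dam sites by trap efficiency thresholds.
        SCENARIOS = {"S1": 60, "S2": 70, "S3": 80, "S4": 90}  # TE thresholds in percent

        def screen_sites(sites, te_key="TE"):
            """sites: iterable of dicts, each carrying a TE value in percent (assumed structure)."""
            return {name: [s for s in sites if s[te_key] >= threshold]
                    for name, threshold in SCENARIOS.items()}

        # Hypothetical example:
        # candidates = [{"id": 1, "TE": 85.2}, {"id": 2, "TE": 63.0}, {"id": 3, "TE": 92.4}]
        # counts = {name: len(found) for name, found in screen_sites(candidates).items()}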