30 research outputs found

    Morphological parameters causing landslides: A case study of elevation

    Get PDF
    The history of landslide susceptibility maps goes back about 50 years; hazard and risk maps followed later, and inventory maps provide the source for all of them. The literature contains parameters selected specifically for each study area, as well as parameters chosen because they are easy to produce and their data easy to obtain. This study researched the effect of elevation on landslides by reviewing the literature in detail. The class ranges and elevation values used in earlier studies were reviewed and applied to map sections selected from Turkey, with the goal of determining the elevation ranges at which landslides occur. The study comprises two stages: in the first, statistical data were compiled from the literature; in the second, these data were analyzed and compared with the elevation values of the selected map sections. For this purpose, nearly 1,500 studies prepared between 1967 and 2019 were reviewed. According to the literature, the elevation parameter is widely used in analyses because it is easy to produce and is morphologically effective

    Systematic sample subdividing strategy for training landslide susceptibility models

    Full text link
    © 2019 Elsevier B.V. Current practice in choosing training samples for landslide susceptibility modelling (LSM) is to randomly subdivide inventory information into training and testing samples. Where inventory data differ in distribution, the selection of training samples by a random process may cause inefficient training of machine learning (ML)/statistical models. A systematic technique may, however, produce efficient training samples that represent the entire inventory data well. This is particularly true when inventory information is scarce. This research proposed a systematic strategy to deal with this problem based on a fundamental measure of distance between probability distributions (i.e. the Hellinger distance) and a novel graphical representation of the information contained in inventory data (i.e. the inventory information curve, IIC). This graphical representation illustrates the relative increase in available information with the growth of the training sample size. Experiments on a selected dataset over the Cameron Highlands, Malaysia were conducted to validate the proposed methods. The dataset contained 104 landslide inventories and 7 landslide-conditioning factors (i.e. altitude, slope, aspect, land use, distance from the stream, distance from the road and distance from lineament) derived from a LiDAR-based digital elevation model and thematic maps acquired from government authorities. In addition, three ML/statistical models, namely, k-nearest neighbour (KNN), support vector machine (SVM) and decision tree (DT), were utilised to assess the proposed sampling strategy for LSM. The impacts of the models' hyperparameters, noise and outliers on the performance of the models and the shape of the IICs were also investigated and discussed. To evaluate the proposed method further, it was compared with other standard methods such as random sampling (RS), stratified RS (SRS) and cross-validation (CV). The evaluations were based on the area under the receiver operating characteristic (ROC) curves.
The results show that IICs are useful in explaining the information content in the training subset and their differences from the original inventory datasets. The quantitative evaluation with KNN, SVM and DT shows that the proposed method outperforms the RS and SRS in all the models and the CV method in KNN and DT models. The proposed sampling strategy enables new applications in landslide modelling, such as measuring inventory data content and complexity and selecting effective training samples to improve the predictive capability of landslide susceptibility models
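    The subset-selection idea described above can be illustrated with a small sketch: candidate training subsets of a conditioning factor are scored by the Hellinger distance between their histogram and that of the full inventory, and the closest subset is kept. The data, bin count, subset size and seeds below are invented for illustration and are not from the paper.

```python
import numpy as np

def hellinger(p, q):
    # Hellinger distance between two discrete probability distributions
    # (0 = identical, 1 = completely disjoint)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

rng = np.random.default_rng(0)
# hypothetical conditioning-factor values (e.g. slope) for 104 inventory points
inventory = rng.normal(50.0, 10.0, size=104)

bins = np.linspace(inventory.min(), inventory.max(), 11)
full_hist, _ = np.histogram(inventory, bins=bins)
full_p = full_hist / full_hist.sum()

# score random candidate training subsets; keep the one closest to the inventory
best_d, best_subset = np.inf, None
for _ in range(200):
    subset = rng.choice(inventory, size=70, replace=False)
    hist, _ = np.histogram(subset, bins=bins)
    d = hellinger(hist / hist.sum(), full_p)
    if d < best_d:
        best_d, best_subset = d, subset
```

A small distance indicates the subset mirrors the distribution of the whole inventory, which is the property the systematic strategy optimises.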

    Spatial prediction of rotational landslide using geographically weighted regression, logistic regression, and support vector machine models in Xing Guo area (China)

    Full text link
    © 2017 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This study evaluated the geographically weighted regression (GWR) model for landslide susceptibility mapping in Xing Guo County, China. In this study, 16 conditioning factors, such as slope, aspect, altitude, topographic wetness index, stream power index (SPI), sediment transport index, soil, lithology, normalized difference vegetation index (NDVI), land use, rainfall, distance to road, distance to river, distance to fault, plan curvature, and profile curvature, were analyzed. The chi-square feature selection method was adopted to compare the significance of each factor with landslide occurrence. The GWR model was compared with two well-known models, namely, logistic regression (LR) and support vector machine (SVM). Results of chi-square feature selection indicated that lithology and slope are the most influential factors, whereas SPI was found statistically insignificant. Four landslide susceptibility maps were generated by the GWR, SGD-LR, SGD-SVM, and SVM models. The GWR model exhibited the highest performance in terms of success rate and prediction accuracy, with values of 0.789 and 0.819, respectively. The SVM model exhibited slightly lower AUC values than the GWR model. Validation results of the four models indicate that GWR is a better model than other widely used models
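    The chi-square feature-selection step can be sketched as follows with scikit-learn. Because the chi-square test requires non-negative inputs, the factors are min-max scaled first; the synthetic factors stand in for the real conditioning layers (lithology, slope, etc.), and the sample and feature counts are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2
from sklearn.preprocessing import MinMaxScaler

# synthetic stand-in for conditioning factors and landslide/non-landslide labels
X, y = make_classification(n_samples=400, n_features=6, n_informative=3, random_state=0)
X_pos = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative features

scores, p_values = chi2(X_pos, y)
ranking = np.argsort(scores)[::-1]  # indices of factors, most influential first
```

In the study's setting, a high score and low p-value would flag a factor such as lithology as influential, while an insignificant factor such as SPI could be dropped.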

    Urban tree classification using discrete-return LiDAR and an object-level local binary pattern algorithm

    Full text link
    © 2019, © 2019 Informa UK Limited, trading as Taylor & Francis Group. Urban trees have the potential to mitigate some of the harm brought about by rapid urbanization and population growth, as well as serious environmental degradation (e.g. soil erosion, carbon pollution and species extirpation), in cities. This paper presents a novel urban tree extraction modelling approach that uses discrete laser scanning point clouds and object-based textural analysis to (1) develop a model comprising four sub-models, namely (a) height-based split segmentation, (b) feature extraction, (c) texture analysis and (d) classification, and (2) apply this model to classify urban trees. The canopy height model is integrated with the object-level local binary pattern (LBP) algorithm to achieve high classification accuracy. The results of each sub-model reveal that the heights of the classified urban trees ranged from 2.12 m (lowest) to 47.14 m (highest), while their crown widths ranged from 2.55 m to 22.5 m. The results also indicate that the proposed urban tree modelling algorithm is effective for practical use
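    A minimal numpy-only version of the LBP texture step might look like the following. The 3×3 neighbourhood, simple threshold rule and random "canopy" patch are simplifying assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def lbp_3x3(img):
    # basic 8-neighbour local binary pattern over a 2-D grayscale array:
    # each pixel gets one bit per neighbour that is >= the centre value
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        out |= (neigh >= center).astype(np.uint8) << bit
    return out

rng = np.random.default_rng(0)
canopy = rng.integers(0, 255, size=(16, 16))  # stand-in for one canopy-height object
codes = lbp_3x3(canopy)
hist = np.bincount(codes.ravel(), minlength=256)  # texture feature vector per object
```

The 256-bin histogram of LBP codes is what an object-level classifier would consume as the texture descriptor for each segmented crown.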

    Evaluating the variations in the flood susceptibility maps accuracies due to the alterations in the type and extent of the flood inventory

    Get PDF
    This paper explores the influence of the extent and density of the inventory data on the final outcomes of flood susceptibility mapping. Specifically, it examines the impact of different formats and extents of the flood inventory data on the final susceptibility map. The extreme 2011 Brisbane flood event was used as the case study. The Logistic Regression (LR) method was selected for the modelling, as it is a well-known algorithm in natural hazard modelling owing to its interpretability, rapid processing time and accurate measurement approach. The LR model was applied using both polygon and point formats of the inventory data. Random samples of 1000, 700, 500, 300, 100 and 50 points were selected, and susceptibility mapping was undertaken using each group of random points. The resultant maps were assessed visually and statistically using the Area Under the Curve (AUC) method. The prediction rates measured for the susceptibility maps produced by the polygon format and by 1000, 700, 500, 300, 100 and 50 random points were 63 %, 76 %, 88 %, 80 %, 74 %, 71 % and 65 %, respectively. Evidently, using the polygon format of the inventory data did not lead to reasonable outcomes. In the case of random points, increasing the number of points increased the prediction rates, except for 1000 points. Hence, minimum and maximum thresholds for the extent of the inventory must be set prior to the analysis. It is concluded that the extent and format of the inventory data are two of the influential components in the precision of the modelling
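    The experiment above can be mimicked on synthetic data: train LR on progressively larger random subsets of the training pool and score each with AUC on a held-out set. The feature counts, sample sizes and seeds are arbitrary placeholders, not the Brisbane data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for flood conditioning factors and flooded/not-flooded labels
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

rng = np.random.default_rng(1)
aucs = {}
for n in (50, 100, 300, 500, 700, 1000):
    idx = rng.choice(len(X_tr), size=n, replace=False)  # inventory of n random points
    model = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    aucs[n] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

Plotting `aucs` against `n` reproduces the kind of curve from which minimum and maximum inventory-size thresholds could be read off.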

    Landslide detection using multi-scale image segmentation and different machine learning models in the higher Himalayas

    Get PDF
    Landslides represent a severe hazard in many areas of the world. Accurate landslide maps are needed to document the occurrence and extent of landslides and to investigate their distribution, types, and the pattern of slope failures. Landslide maps are also crucial for determining landslide susceptibility and risk. Satellite data have been widely used for such investigations, next to data from airborne or unmanned aerial vehicle (UAV)-borne campaigns and Digital Elevation Models (DEMs). We have developed a methodology that incorporates object-based image analysis (OBIA) with machine learning (ML) methods, including the multilayer perceptron neural network (MLP-NN) and random forest (RF), for landslide detection. We identified the optimal scale parameters (SP) and used them for multi-scale segmentation and further analysis. We evaluated the resulting objects using the object pureness index (OPI), object matching index (OMI), and object fitness index (OFI) measures. We then applied two different methods to optimize the landslide detection task: (a) an ensemble method of stacking that combines the different ML methods to improve performance, and (b) Dempster–Shafer theory (DST), to combine the multi-scale segmentation and classification results. Through the combination of the ML methods and the multi-scale approach, the framework enhanced landslide detection when it was tested for detecting earthquake-triggered landslides in Rasuwa district, Nepal. PlanetScope optical satellite images and a DEM were used, along with the derived landslide conditioning factors. Different accuracy assessment measures were used to compare the results against a field-based landslide inventory. All ML methods yielded their highest overall accuracies, ranging from 83.3% to 87.2%, when using objects with the optimal SP rather than other SPs. However, applying DST to combine the multi-scale results of each ML method significantly increased the overall accuracies to almost 90%.
Overall, the integration of OBIA with ML methods resulted in appropriate landslide detections, but using the optimal SP and ML method is crucial for success
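    The stacking step (a) can be sketched with scikit-learn's StackingClassifier, here combining an MLP and an RF under a logistic-regression meta-learner on synthetic data. The architectures, sizes and seeds are illustrative only, and the DST combination step is not shown.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# synthetic stand-in for per-object features from the multi-scale segmentation
X, y = make_classification(n_samples=600, n_features=10, n_informative=5, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

# stack the base learners; their cross-validated predictions feed the meta-learner
stack = StackingClassifier(
    estimators=[
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=800, random_state=7)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=7)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The meta-learner learns how much to trust each base model, which is the mechanism by which stacking can outperform any single classifier.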

    Landslide susceptibility mapping using remote sensing data and geographic information system-based algorithms

    Get PDF
    Whether they occur due to natural triggers or human activities, landslides lead to loss of life and damage to property, impacting infrastructure, road networks and buildings. A Landslide Susceptibility Map (LSM) provides policy and decision makers with valuable information. This study aims to detect landslide locations by using Sentinel-1 data, the only freely available online radar imagery, and to map areas prone to landslides using a novel AB-ADTree algorithm in Cameron Highlands, Pahang, Malaysia. A total of 152 landslide locations were detected by integrating the Interferometric Synthetic Aperture Radar (InSAR) technique, Google Earth (GE) images and an extensive field survey. Of these, 80% of the data were employed for training the machine learning algorithms and the remaining 20% for validation purposes. Seventeen triggering and conditioning factors, namely slope, aspect, elevation, distance to road, distance to river, proximity to fault, road density, river density, Normalized Difference Vegetation Index (NDVI), rainfall, land cover, lithology, soil types, curvature, profile curvature, Stream Power Index (SPI) and Topographic Wetness Index (TWI), were extracted from satellite imagery, a digital elevation model (DEM), and geological and soil maps. These factors were used to generate landslide susceptibility maps with the Logistic Regression (LR) model, Logistic Model Tree (LMT), Random Forest (RF), Alternating Decision Tree (ADTree), Adaptive Boosting (AdaBoost) and a novel hybrid of the ADTree and AdaBoost models, the AB-ADTree model. The validation was based on the area under the ROC curve (AUC) and the statistical measures of Positive Predictive Value (PPV), Negative Predictive Value (NPV), sensitivity, specificity, accuracy and Root Mean Square Error (RMSE). The results showed AUCs of 90%, 92%, 88%, 59%, 96% and 94% for the LR, LMT, RF, ADTree, AdaBoost and AB-ADTree algorithms, respectively.
Non-parametric Friedman and Wilcoxon tests were also applied to assess the models' performance; the findings revealed that ADTree is inferior to the other models used in this study. Using a handheld Global Positioning System (GPS), a field study was performed to validate almost 20% (30 locations) of the detected landslide locations, and the results confirmed that these locations were correctly detected. In conclusion, this study is applicable for hazard mitigation purposes and regional planning
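    The boosting half of the AB-ADTree idea can be sketched with scikit-learn's AdaBoostClassifier on synthetic data. scikit-learn has no ADTree implementation, so the default weak learner (a depth-1 decision tree) stands in for it here; the sample sizes, 17 placeholder features and seeds are invented for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for the 17 triggering and conditioning factors
X, y = make_classification(n_samples=800, n_features=17, n_informative=8, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=3)

# AdaBoost re-weights misclassified samples so each successive weak tree
# focuses on the hard cases; this is only an analogue of the AB-ADTree hybrid
boosted = AdaBoostClassifier(n_estimators=100, random_state=3).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, boosted.predict_proba(X_te)[:, 1])
```

The AUC computed here corresponds to the validation metric the study reports for each algorithm.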

    Remote Sensing of Natural Hazards

    Get PDF
    Each year, natural hazards such as earthquakes, cyclones, flooding, landslides, wildfires, avalanches, volcanic eruptions, extreme temperatures, storm surges, drought, etc., result in widespread loss of life, livelihood, and critical infrastructure globally. With the unprecedented growth of the human population, large-scale development activities, and changes to the natural environment, the frequency and intensity of extreme natural events and their consequent impacts are expected to increase in the future. Technological interventions provide essential provisions for the prevention and mitigation of natural hazards. The data obtained through remote sensing systems with varied spatial, spectral, and temporal resolutions particularly provide prospects for furthering knowledge on spatiotemporal patterns and forecasting of natural hazards. The collection of data using earth observation systems has been valuable for alleviating the adverse effects of natural hazards, especially with their near real-time capabilities for tracking extreme natural events. Remote sensing systems from different platforms also serve as an important decision-support tool for devising response strategies, coordinating rescue operations, and making damage and loss estimations. With these in mind, this book seeks original contributions to the advanced applications of remote sensing and geographic information systems (GIS) techniques in understanding various dimensions of natural hazards through new theory, data products, and robust approaches

    Natural Disaster Application on Big Data and Machine Learning: A Review

    Get PDF
    Natural disasters are events that are difficult to avoid, but there are several ways of reducing their risks. One of them is implementing disaster reduction programs, a concept already applied in several developed countries. In addition to such programs, the risks can be predicted or reduced using artificial intelligence technologies such as big data, machine learning, and deep learning. Utilizing these methods facilitates the tasks of visualizing, analyzing, and predicting natural disasters. This research focuses on reviewing and understanding the purpose of machine learning and big data in the area of disaster management and natural disasters. The paper provides insight into the use of big data, machine learning, and deep learning in six disaster management areas: early warning, damage assessment, monitoring and detection, forecasting and prediction, post-disaster coordination and response, and long-term risk assessment and reduction