
    Long lead time drought forecasting using lagged climate variables and a stacked long short-term memory model.

    Drought forecasting with a long lead time is essential for early warning systems and risk management strategies. Machine learning algorithms have proven beneficial in forecasting droughts; however, forecasting at long lead times remains a challenge because of the effects of climate change and the complexities involved in drought assessment. The rise of deep learning techniques can address this issue, and the present work uses a stacked long short-term memory (LSTM) architecture to forecast a commonly used drought measure, namely, the Standard Precipitation Evaporation Index. The model was applied to the New South Wales region of Australia, with hydrometeorological and climatic variables as predictors. The multivariate interpolated grid of the Climatic Research Unit was used to compute the index at monthly scales. The architecture was trained on data from 1901-2000 and tested on data from 2001-2018, and the index was forecast at lead times ranging from 1 month to 12 months. The forecasts were analysed in terms of drought characteristics, such as drought intensity, drought onset, spatial extent and number of drought months, to elucidate how these characteristics improve the understanding of drought forecasting. The drought intensity forecasting capability of the model was assessed using two statistical metrics, namely, the coefficient of determination (R2) and the root-mean-square error. The variation in the number of drought months was examined using the threat score technique. The results of this study show that the stacked LSTM model can forecast effectively at both short-term and long-term lead times. Such findings will be essential for government agencies and can be further tested to understand the forecasting capability of the presented architecture at shorter temporal scales, which can range from days to weeks.
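    The threat score used to examine drought-month counts can be sketched as follows. This is an illustrative implementation, not the authors' code, and the SPEI cut-off of -1.0 for a drought month is an assumption:

```python
import numpy as np

def threat_score(observed_spei, forecast_spei, threshold=-1.0):
    """Threat score = hits / (hits + misses + false alarms).

    A month counts as a drought month when its SPEI falls at or below
    `threshold` (-1.0 here is an illustrative cut-off, not the paper's).
    """
    obs = np.asarray(observed_spei) <= threshold
    fct = np.asarray(forecast_spei) <= threshold
    hits = np.sum(obs & fct)            # drought forecast and observed
    misses = np.sum(obs & ~fct)         # drought observed but not forecast
    false_alarms = np.sum(~obs & fct)   # drought forecast but not observed
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")

# illustrative monthly SPEI series
obs = [-1.5, -0.2, -1.1, 0.3, -1.8, 0.9]
fct = [-1.2, -0.4, -0.8, 0.1, -1.6, -1.1]
print(threat_score(obs, fct))  # 2 hits, 1 miss, 1 false alarm -> 0.5
```

    A perfect forecast scores 1; a forecast with no hits scores 0, which makes the score a compact summary of how well the forecast reproduces the observed drought months.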

    Do Termitaria Indicate the Presence of Groundwater? A Case Study of Hydrogeophysical Investigation on a Land Parcel with Termite Activity.

    Termite nests have long been suggested to be good indicators of groundwater, but only a few studies demonstrate the relationship between the two. This study therefore aims at investigating the most favourable spots for locating groundwater structures on a small parcel of land with conspicuous termite activity. To achieve this, geophysical soundings using the well-established vertical electrical sounding (VES) technique were carried out on the gridded study area. A total of nine VESs, with one at the foot of a termitarium, were conducted. The VES results were interpreted and assessed via two different techniques: (1) physical evaluation as performed by drillers in the field and (2) integration of primary and secondary geoelectrical parameters in a geographic information system (GIS). The result of the physical evaluation indicated a clear case of subjectivity in the interpretation but was consistent in the choice of VES points 1 and 6 (the termitarium location) as the most prospective points to be considered for drilling. Similarly, the integration of the geoelectrical parameters led to the mapping of the most prospective groundwater portion of the study area, with the termitarium chiefly in the center of the most suitable region. This shows that termitaria are valuable landscape features that can be employed as biomarkers in the search for groundwater.
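    At each sounding point, the field quantity interpreted is the apparent resistivity. A minimal sketch for the Schlumberger array, a common VES configuration (the abstract does not state the array type, and all electrode spacings and readings below are illustrative):

```python
import math

def schlumberger_apparent_resistivity(ab_half, mn_half, delta_v, current):
    """Apparent resistivity (ohm-m): rho_a = K * dV / I, with
    geometric factor K = pi * (L**2 - l**2) / (2 * l), where
    L = AB/2 (current-electrode half-spacing, m) and
    l = MN/2 (potential-electrode half-spacing, m)."""
    k = math.pi * (ab_half ** 2 - mn_half ** 2) / (2.0 * mn_half)
    return k * delta_v / current

# illustrative reading: AB/2 = 50 m, MN/2 = 5 m, dV = 12 mV, I = 100 mA
rho_a = schlumberger_apparent_resistivity(50.0, 5.0, 0.012, 0.1)
print(round(rho_a, 1))
```

    Repeating the measurement at increasing AB/2 probes progressively deeper layers, which is what produces the sounding curve interpreted for aquifer prospects.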

    Systematic sample subdividing strategy for training landslide susceptibility models

    © 2019 Elsevier B.V. Current practice in choosing training samples for landslide susceptibility modelling (LSM) is to randomly subdivide inventory information into training and testing samples. Where inventory data differ in distribution, the selection of training samples by a random process may cause inefficient training of machine learning (ML)/statistical models. A systematic technique may, however, produce efficient training samples that represent the entire inventory data well. This is particularly true when inventory information is scarce. This research proposes a systematic strategy to deal with this problem based on a fundamental measure of distance between probability distributions (the Hellinger distance) and a novel graphical representation of the information contained in inventory data (the inventory information curve, IIC). This graphical representation illustrates the relative increase in available information with the growth of the training sample size. Experiments on a selected dataset over the Cameron Highlands, Malaysia were conducted to validate the proposed methods. The dataset contained 104 landslide inventories and 7 landslide-conditioning factors (altitude, slope, aspect, land use, distance from the stream, distance from the road and distance from lineament) derived from a LiDAR-based digital elevation model and thematic maps acquired from government authorities. In addition, three ML/statistical models, namely k-nearest neighbour (KNN), support vector machine (SVM) and decision tree (DT), were utilised to assess the proposed sampling strategy for LSM. The impacts of the models' hyperparameters, noise and outliers on the performance of the models and the shape of the IICs were also investigated and discussed. To evaluate the proposed method further, it was compared with other standard methods such as random sampling (RS), stratified RS (SRS) and cross-validation (CV). The evaluations were based on the area under the receiver operating characteristic curves. The results show that IICs are useful in explaining the information content in the training subset and its differences from the original inventory datasets. The quantitative evaluation with KNN, SVM and DT shows that the proposed method outperforms RS and SRS in all the models, and the CV method in the KNN and DT models. The proposed sampling strategy enables new applications in landslide modelling, such as measuring inventory data content and complexity and selecting effective training samples to improve the predictive capability of landslide susceptibility models.
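    The Hellinger distance at the core of the proposed strategy measures how far a training subset's distribution drifts from that of the full inventory. A minimal sketch over discrete (binned) distributions; the example histograms are illustrative, not the study's data:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions:
    0 means identical, 1 means disjoint support."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# e.g. binned conditioning-factor frequencies: full inventory vs. training subset
full_inventory = [0.1, 0.4, 0.3, 0.2]
training_subset = [0.15, 0.35, 0.3, 0.2]
print(hellinger(full_inventory, training_subset))  # small -> representative subset
```

    A systematic sampler can then prefer subsets whose distance to the full inventory stays small as the training-sample size grows, which is the behaviour the IIC visualises.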

    IoT-Based Geotechnical Monitoring of Unstable Slopes for Landslide Early Warning in the Darjeeling Himalayas.

    In hilly areas across the world, landslides have been an increasing menace, causing loss of lives and properties. The damage caused by landslides in the recent past calls for disaster risk reduction measures from the authorities. Development of an effective landslide early warning system (LEWS) is an important risk reduction approach by which the authorities and the public can be warned about future landslide events. The Indian Himalayas are among the most landslide-prone areas in the world, and attempts have been made to determine rainfall thresholds for the possible occurrence of landslides in the region. The established thresholds proved effective in predicting most of the landslide events, but the major drawback observed is the high number of false alarms. For an LEWS to be successfully operational, it is essential to reduce the number of false alarms using physical monitoring. Therefore, to improve the efficiency of the LEWS and to make the thresholds serviceable, the slopes are monitored using a sensor network. In this study, micro-electro-mechanical systems (MEMS)-based tilt sensors and volumetric water content sensors were used to monitor active slopes in Chibo, in the Darjeeling Himalayas. The Internet of Things (IoT)-based network uses wireless modules for communication from individual sensors to the data logger and from the data logger to an internet database. The slopes are on the banks of mountain rivulets (jhoras) known as the sinking zones of Kalimpong. The locality is highly affected by surface displacements in the monsoon season due to incessant rains and improper drainage. Real-time field monitoring in the study area is being conducted for the first time to evaluate the applicability of tilt sensors in the region. The sensors are embedded within the soil to measure tilting angles and moisture content at shallow depths. The slopes were monitored continuously during three monsoon seasons (2017-2019), and the data from the sensors were compared with field observations and rainfall data. The relationship between the change in tilt rate, volumetric water content and rainfall is explored in the study, and the records prove the significance of considering long-term rainfall conditions rather than immediate rainfall events in developing rainfall thresholds for the region.
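    As a rough illustration of the sensor processing (not the authors' code), a MEMS tilt sensor resolves the tilt angle from the static gravity components on its axes, and the quantity of interest for warning is the change in tilt over time. The axis convention and readings below are illustrative assumptions:

```python
import math

def tilt_angle_deg(ax, ay, az):
    """Tilt of the sensor's vertical axis, in degrees, computed from
    static accelerometer components (in g); axis convention assumed."""
    return math.degrees(math.atan2(math.hypot(ax, ay), az))

def tilt_rate_deg_per_day(angle_before, angle_after, hours_elapsed):
    """Change in tilt expressed as degrees per day between two readings."""
    return (angle_after - angle_before) * 24.0 / hours_elapsed

a0 = tilt_angle_deg(0.0, 0.0, 1.0)    # sensor perfectly vertical -> 0 deg
a1 = tilt_angle_deg(0.1, 0.0, 0.995)  # slight downslope lean a day later
print(round(a1, 2), round(tilt_rate_deg_per_day(a0, a1, 24.0), 2))
```

    A sustained rise in the tilt rate, correlated with high volumetric water content after prolonged rainfall, is the kind of signal such monitoring adds on top of rainfall thresholds.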

    Irrigation water allocation at farm level based on temporal cultivation-related data using meta-heuristic optimisation algorithms

    © 2019 by the authors. The present water crisis necessitates a frugal water management strategy. Deficit irrigation can be regarded as an efficient strategy for agricultural water management. Optimal allocation of water to agricultural farms is a computationally complex problem because of many factors, including limitations and constraints related to irrigation, the large number of allocation states, and the non-linearity and complexity of the objective function. Meta-heuristic algorithms are typically used to solve such complex problems. The main objective of this study is to formulate water allocation at farm level using temporal cultivation data as an optimisation problem, solve it using various meta-heuristic algorithms, and compare the results. The objective of the optimisation is to maximise the total income of all considered lands. The criteria of objective function value, convergence trend, robustness, runtime, and complexity of use and modelling are used to compare the algorithms. Finally, the algorithms are ranked using the technique for order of preference by similarity to ideal solution (TOPSIS). The income resulting from the allocation of water by the imperialist competitive algorithm (ICA) was 1.006, 1.084, and 1.098 times that of particle swarm optimisation (PSO), the bees algorithm (BA), and the genetic algorithm (GA), respectively. ICA and PSO were superior to the other algorithms in most evaluations. According to the TOPSIS results, the algorithms, in order of priority, are ICA, PSO, BA, and GA. In addition, the experiments showed that using meta-heuristic algorithms such as ICA results in higher income (4.747 times) and better management of the water deficit than the commonly used area-based water allocation method.
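    TOPSIS ranks alternatives by their closeness to an ideal solution across weighted criteria. A minimal sketch; the decision matrix, weights and criterion directions below are illustrative placeholders, not the study's actual scores:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Return closeness coefficients (higher = closer to the ideal).
    `benefit[j]` is True for criteria to maximise, False to minimise."""
    m = np.asarray(matrix, float)
    # vector-normalise each criterion column, then apply the weights
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, float)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)  # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# rows: ICA, PSO, BA, GA; columns: relative income (benefit), runtime (cost)
scores = topsis([[1.10, 25], [1.09, 28], [1.01, 40], [1.00, 55]],
                weights=[0.6, 0.4], benefit=[True, False])
print(int(scores.argmax()))  # index of the top-ranked algorithm
```

    With these placeholder scores ICA dominates both criteria and is ranked first, consistent with the ordering reported in the abstract.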

    Susceptibility to seismic amplification and earthquake probability estimation using recurrent neural network (RNN) Model in Odisha, India

    © 2020 by the authors. The eastern region of India, including the coastal state of Odisha, is a moderately seismic-prone area under seismic zones II and III. However, no major studies have been conducted on earthquake probability assessment (EPA) and earthquake hazard assessment (EHA) in Odisha. This paper had two main objectives: (1) to assess the susceptibility to seismic wave amplification (SSA) and (2) to estimate earthquake probability in Odisha. In total, 12 indicators were employed to assess SSA and EPA. Firstly, using the historical earthquake catalogue, the peak ground acceleration (PGA) and intensity variation were observed for the Indian subcontinent. Locations of high amplitude and frequency were identified from the estimated PGA, and periodograms were plotted. Secondly, indicators such as slope, elevation, curvature and the amplification values of rocks were used to generate the SSA map using predefined layer weights. Thirdly, 10 indicators were implemented in a developed recurrent neural network (RNN) model to create an earthquake probability map (EPM) for the state. According to the results, recent to Quaternary unconsolidated sedimentary rocks and alluvial deposits have great potential to amplify earthquake intensity and consequently lead to acute ground motion. High intensity was observed in the coastal and central parts of the state. Complicated morphometric structures along with high intensity variation could be other parameters behind the high potential of the deposits in the Mahanadi River and its delta. The results show that the Mahanadi basin has dominant structural control on the earthquakes found in the western parts of the state. Major faults trend WNW-ESE, NE-SW and NNW-SSE, which may lead to isoseismic patterns. The results also show that the western part of the state is highly probable for events, while the eastern coastal part is highly susceptible to seismic amplification. The RNN model achieved an accuracy of 0.94, a precision of 0.94, a recall of 0.97, an F1 score of 0.96, a critical success index (CSI) of 0.92 and a Fowlkes-Mallows index (FM) of 0.95.
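    All of the reported scores can be derived from the binary confusion counts of the classifier. A small sketch with illustrative counts (not the study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification scores from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "csi": tp / (tp + fp + fn),          # critical success index
        "fm": (precision * recall) ** 0.5,   # Fowlkes-Mallows index
    }

# illustrative confusion counts
scores = classification_metrics(tp=8, fp=2, fn=2, tn=88)
print(scores["precision"], scores["recall"], scores["csi"])
```

    Note that the CSI ignores true negatives, so it is stricter than accuracy when the positive class (a probable event) is rare.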

    Geospatial modelling of watershed peak flood discharge in Selangor, Malaysia

    © 2019 by the authors. Conservative peak flood discharge estimation methods, such as the rational method, do not take into account the soil infiltration of the precipitation, leading to inaccurate estimations of peak discharges during storm events. The accuracy of the estimated peak flood discharge is crucial in designing a drainage system that has the capacity to channel runoff during a storm event, especially cloudbursts, and in the analysis of flood prevention and mitigation. The aim of this study was to model the peak flood discharge of each sub-watershed in Selangor using a geographic information system (GIS). The geospatial modelling integrated the watershed terrain model, the developed Soil Conservation Service Curve Number (SCS-CN) and precipitation to develop an equation for the estimation of peak flood discharge. The Hydrological Engineering Center-Hydrological Modeling System (HEC-HMS) was then used to simulate the rainfall-runoff based on the Clark unit hydrograph to validate the modelled estimation of peak flood discharge. The estimated peak flood discharge showed a coefficient of determination, r2, of 0.9445 when compared with the runoff simulation of the Clark unit hydrograph. Both the results of the geospatial modelling and the developed equation suggest that the peak flood discharge of a sub-watershed during a storm event has a positive relationship with the watershed area, precipitation and Curve Number (CN), which takes into account the soil bulk density and land use of the studied area, Selangor in Malaysia. The findings of the study present a comparable and holistic approach to the estimation of peak flood discharge in a watershed, which can be used in the absence of a hydrodynamic simulation model.
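    The rainfall-runoff relationship behind the CN approach is the SCS Curve Number equation. A sketch in metric units; the CN value, rainfall depth and the standard 0.2 initial-abstraction ratio are illustrative, not taken from the study:

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth Q (mm) from storm rainfall P (mm) and curve
    number CN, metric form: S = 25400/CN - 254."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = ia_ratio * s          # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0             # all rainfall abstracted, no direct runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# 100 mm storm on a sub-watershed with CN = 80
print(round(scs_cn_runoff(100.0, 80), 1))  # ~50.5 mm of direct runoff
```

    A higher CN (more impervious or wetter soil) lowers the retention S and pushes more of the same storm into direct runoff, which is the positive CN-discharge relationship the abstract reports.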

    Short-term spatio-temporal drought forecasting using random forests model at New South Wales, Australia

    © 2020 by the authors. Droughts can cause significant damage to agriculture and water resources, leading to severe economic losses and loss of life. One of the most important aspects is to develop effective tools to forecast drought events that could be helpful in mitigation strategies. The understanding of droughts has become more challenging because of the effects of climate change, urbanization and water management; therefore, the present study aims to forecast droughts by determining an appropriate index and analyzing its changes using climate variables. The work was conducted in three phases, the first being the determination of the Standard Precipitation Evaporation Index (SPEI) using the global climatic dataset of the Climate Research Unit (CRU) from 1901-2018. The indices are calculated at different monthly intervals, which can depict short-term or long-term changes, and the index value represents different drought classes, ranging from extremely dry to extremely wet. However, the present study focused only on forecasting at short-term scales for the New South Wales (NSW) region of Australia and was conducted at two time scales, one month and three months. The second phase involved dividing the data into three samples: training (1901-2010), testing (2011-2015) and validation (2016-2018). Finally, a machine learning approach, Random Forest (RF), was used to train and test the data using various climatic variables, e.g., rainfall, potential evapotranspiration, cloud cover, vapor pressure and temperature (maximum, minimum and mean). The final phase was to analyze the performance of the model based on statistical metrics and drought classes. The performance over the testing period was assessed using two statistical metrics, the Coefficient of Determination (R2) and the Root-Mean-Square Error (RMSE). The model showed a considerably high R2 value for both time scales.
    However, statistical metrics analyze only the variation between the predicted and observed index values and do not consider the drought classes. Therefore, the variation in predicted and observed SPEI values was analyzed based on different drought classes, which were validated using the Receiver Operating Characteristic (ROC)-based Area Under the Curve (AUC) approach. The results reveal that the classification of drought classes during the validation period had an AUC of 0.82 for the SPEI 1 case and 0.84 for the SPEI 3 case. The study shows that the Random Forest model can perform both regression and classification analysis for drought studies in NSW. The work also suggests that the performance of any model for drought forecasting should not be assessed only through statistical metrics, but also by examining the variation in terms of drought characteristics.
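    The two testing-phase metrics can be sketched as follows, on illustrative observed/predicted SPEI values (not the study's data):

```python
import numpy as np

def r2_rmse(observed, predicted):
    """Coefficient of determination (R2) and root-mean-square error (RMSE)."""
    obs = np.asarray(observed, float)
    pred = np.asarray(predicted, float)
    ss_res = np.sum((obs - pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((obs - obs.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    return r2, rmse

obs = [-1.2, -0.5, 0.3, 1.1, -1.8]
pred = [-1.0, -0.6, 0.1, 1.0, -1.5]
r2, rmse = r2_rmse(obs, pred)
print(round(r2, 3), round(rmse, 3))
```

    As the abstract notes, these metrics quantify numerical agreement only; two forecasts with the same RMSE can still place months in different drought classes, which is why the class-based ROC/AUC check is added.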

    Temporal hydrological drought index forecasting for New South Wales, Australia using machine learning approaches

    © 2020 by the authors. Droughts can cause significant damage to agriculture and water resources, leading to severe economic losses. One of the most important aspects of drought management is to develop useful tools to forecast drought events, which could be helpful in mitigation strategies. Recent global trends in drought events reveal that climate change is a dominant factor influencing such events. The present study aims to understand this effect for the New South Wales (NSW) region of Australia, which has suffered several droughts in recent decades. Drought is usually characterised using a drought index; the Standard Precipitation Evaporation Index (SPEI) was chosen because it uses both rainfall and temperature parameters in its calculation and has proven to better reflect drought. The drought index was calculated at various time scales (1, 3, 6, and 12 months) using a Climate Research Unit (CRU) dataset. The study focused on predicting the temporal aspect of the drought index using 13 different variables, of which eight were climatic drivers and sea surface temperature indices, and the remainder were various meteorological variables. The models used for forecasting were an artificial neural network (ANN) and support vector regression (SVR). The models were trained on data from 1901-2010 and tested over nine years (2011-2018) using three performance metrics (coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE)). The results indicate that the ANN was better than the SVR in predicting temporal drought trends, with a highest R2 value of 0.86 for the former compared with 0.75 for the latter. The study also reveals that sea surface temperatures and the climatic index (Pacific Decadal Oscillation) do not have a significant effect on the temporal drought aspect. The present work, in which only the temporal trends are studied, can be considered a first step towards the use of climatological variables for understanding drought incidences in the NSW region.
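    How the index reflects different time scales can be sketched as follows: the climatic water balance D = P - PET is aggregated over a k-month rolling window and then standardised. Here a simple z-score stands in for the log-logistic standardisation used in full SPEI calculations, and the data are synthetic:

```python
import numpy as np

def rolling_balance(p, pet, k):
    """k-month rolling sums of the climatic water balance D = P - PET."""
    d = np.asarray(p, float) - np.asarray(pet, float)
    return np.convolve(d, np.ones(k), mode="valid")

def zscore(x):
    """Simple standardisation, a stand-in for the SPEI's distribution fit."""
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(0)
p = rng.gamma(2.0, 30.0, size=120)   # 10 years of synthetic monthly rainfall (mm)
pet = np.full(120, 60.0)             # constant PET, purely illustrative
spei3 = zscore(rolling_balance(p, pet, 3))
print(len(spei3))  # one value per complete 3-month window
```

    Longer windows (6 or 12 months) smooth short wet spells and expose the slower hydrological droughts the study forecasts.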

    Pathways and challenges of the application of artificial intelligence to geohazards modelling

    © 2020 International Association for Gondwana Research. The application of artificial intelligence (AI) and machine learning to geohazard modelling has grown rapidly in recent years, a trend observed in several research and application areas thanks to recent advances in AI. As a result, the increasing reliance on data-driven studies has made their practical application to geohazards (landslides, debris flows, earthquakes, droughts, floods, glacier studies) an interesting prospect. These geohazards were responsible for roughly 80% of the economic loss caused by all natural hazards in the past two decades. The present study analyses the various domains of geohazards that have benefited from classical machine learning approaches and highlights the future course of direction in this field. The emergence of deep learning has filled several gaps in: (i) classification; (ii) seasonal forecasting as well as forecasting at longer lead times; and (iii) temporal change detection. Apart from the usual challenges of dataset availability, climate change and anthropogenic activities, this review emphasizes that future studies should focus on consecutive events along with the integration of physical models. The recent catastrophes in Japan and Australia make a compelling argument for focusing on consecutive events. The availability of higher-temporal-resolution and multi-hazard datasets will prove essential, but the key will be to integrate them with physical models, which would improve our understanding of the mechanisms involved in both single and consecutive hazard scenarios. Geohazard modelling will eventually become a data problem, as with the geosciences generally, and it is therefore essential to develop models capable of handling large volumes of data. Future work should also move towards interpretable models, in the hope of providing a reasonable explanation of the results and thereby achieving the ultimate goal of explainable AI.