272 research outputs found

    DSE

    Get PDF
    Available on GitHub: https://github.com/adririquelme/DSE. Discontinuity Set Extractor (DSE) was programmed by Adrián Riquelme as part of his PhD studies. Its aim is to extract discontinuity sets from a rock mass. The input data is a 3D point cloud, which can be acquired by means of a 3D laser scanner (LiDAR or TLS), digital photogrammetry techniques (such as SfM) or synthetic data. It applies a proposed methodology to semi-automatically identify the points of an unorganised 3D point cloud that are arranged in 3D space along planes

    Control of landslide retrogression by discontinuities: evidence from the integration of airborne- and ground-based geophysical information

    No full text
    The objective of this work is to present a multi-technique approach to define the geometry, the kinematics and the failure mechanism of a retrogressive large landslide (upper part of the La Valette landslide, South French Alps) by combining airborne (ALS) and terrestrial (TLS) laser scanning data with ground-based seismic tomography data. The advantage of combining different methods is to constrain the geometrical and failure mechanism models by integrating different sources of information. Because of a high point density at the ground surface (4.1 pt·m⁻²), a small laser footprint (0.09 m) and an accurate 3D positioning (0.07 m), ALS data are a well-adapted source of information for analyzing morphological structures at the surface. Seismic tomography (seismic velocities) may highlight low seismic-velocity zones indicating the presence of dense fracture networks at the sub-surface. The surface displacements measured from TLS data over a period of two years (May 2008-May 2010) allow one to quantify the landslide activity in the direct vicinity of the identified discontinuities. An important subsidence of the crown area, with an average subsidence rate of 3.07 m·year⁻¹, is determined. The displacement directions indicate that the retrogression is controlled structurally by the pre-existing discontinuities. A conceptual structural model is proposed to explain the failure mechanism and the retrogressive evolution of the main scarp. Uphill, the crown area is affected by planar sliding included in a deeper wedge failure system constrained by two pre-existing fractures. Downhill, the landslide body acts as a buttress for the upper part. Consequently, the progression of the landslide body downhill allows the development of dip-slope failures, and coherent blocks start sliding along planar discontinuities. The volume of the failed mass in the crown area is estimated at 500,000 m³ with the Sloping Local Base Level method

    A new approach for semi-automatic rock mass joints recognition from 3D point clouds

    Get PDF
    Rock mass characterization requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in Light Detection and Ranging (LiDAR) instrumentation currently allow quick and accurate 3D data acquisition, leading to the development of new methodologies for the automatic characterization of rock mass discontinuities. This paper presents a methodology for the identification and analysis of flat surfaces outcropping in a rocky slope using the 3D data obtained with LiDAR. This method identifies and defines the algebraic equations of the different planes of the rock slope surface by applying an analysis based on a coplanarity test of neighbouring points, finding principal orientations by Kernel Density Estimation and identifying clusters with the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm. Different sources of information —synthetic and 3D scanned data— were employed, performing a complete sensitivity analysis of the parameters in order to identify the optimal values of the variables of the proposed method. In addition, the raw source files and the obtained results are freely provided in order to allow a more straightforward comparison of methods, aiming at more reproducible research. This work was partially funded by the University of Alicante (vigrob-157, uausti11–11, and gre09–40 projects), the Swiss National Science Foundation (FNS-138015 and FNS-144040 projects) and by the Generalitat Valenciana (project GV/2011/044)
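    The pipeline described above (a coplanarity test on local point neighbourhoods, followed by density-based clustering of the resulting plane orientations) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the k-nearest-neighbour PCA, the minimal DBSCAN implementation and all parameter values are assumptions, and the Kernel Density Estimation step of the published method is omitted for brevity.

```python
import numpy as np

def plane_normals(points, k=12):
    """Coplanarity-test sketch: fit a plane to each point's k nearest
    neighbours by PCA; the eigenvector of the local covariance matrix
    with the smallest eigenvalue approximates the surface normal."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        dist = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dist)[:k]]
        w, v = np.linalg.eigh(np.cov(nbrs.T))  # eigenvalues ascending
        n = v[:, 0]                            # smallest-eigenvalue axis
        normals[i] = n if n[2] >= 0 else -n    # consistent orientation
    return normals

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN: returns integer cluster labels, -1 marks noise."""
    n = len(X)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        nbrs = list(np.where(np.linalg.norm(X - X[i], axis=1) <= eps)[0])
        if len(nbrs) < min_pts:
            continue                 # stays noise unless reached later
        labels[i] = cluster
        while nbrs:                  # expand the cluster
            j = nbrs.pop()
            if not visited[j]:
                visited[j] = True
                jn = np.where(np.linalg.norm(X - X[j], axis=1) <= eps)[0]
                if len(jn) >= min_pts:
                    nbrs.extend(jn)
            if labels[j] == -1:
                labels[j] = cluster
        cluster += 1
    return labels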

    Use of LIDAR in landslide investigations: a review

    Get PDF
    This paper presents a short history of the appraisal of laser scanner technologies in geosciences used for imaging relief by high-resolution digital elevation models (HRDEMs) or 3D models. A general overview of light detection and ranging (LIDAR) techniques applied to landslides is given, followed by a review of different applications of LIDAR to landslides, rockfalls and debris flows. These applications are classified as: (1) Detection and characterization of mass movements; (2) Hazard assessment and susceptibility mapping; (3) Modelling; (4) Monitoring. This review emphasizes how LIDAR-derived HRDEMs can be used to investigate any type of landslide. It is clear that such HRDEMs are not yet a common tool in landslide investigations, but the technique has opened new domains of application that still have to be developed

    Introducing a moving time window in the analogue method for precipitation prediction to find better analogue situations at a sub-daily time step

    Get PDF
    Analogue methods (AMs) predict local weather variables (predictands), such as precipitation, by means of a statistical relationship with predictors at a synoptic scale. Predictors are extracted from reanalysis datasets that often have a six-hourly time step. For precipitation forecasts, the predictand often consists of daily precipitation (06h to 30h UTC), given the length of the available archives and the unavailability of equivalent archives at a finer time step. The optimal predictors to explain these daily precipitations have been obtained in a calibration procedure with fixed times of observation (e.g. geopotential heights Z1000 at 12h UTC and Z500 at 24h UTC). In operational forecasting, a new target situation is defined by its geopotential predictors at these fixed hours, i.e. Z1000 at 12h UTC and Z500 at 24h UTC. The search for candidate situations for a given target day is then usually undertaken by comparing the state of the atmosphere at the same fixed hours of the day for both the target day and the candidate analogues. However, it can be expected that the best analogy among past synoptic situations does not occur systematically at the same time of the day, and that better candidates can be found by shifting to a different hour. With this assumption, a moving time window (MTW) was introduced to allow the search for candidates at different hours of the day (e.g. Z1000 at 00, 06, 12, 18h UTC and Z500 at 12, 18, 24, 30h UTC, respectively). The MTW technique can only result in a better analogy in terms of the atmospheric circulation (compared to the method with fixed hours), with improved values of the analogy criterion over the entire distribution of analogue dates. A seasonal effect has also been identified, with larger improvements in winter than in summer. However, its interest for precipitation forecasting can only be evaluated with an archive of the corresponding 24h totals, i.e. not only 06-30h UTC totals, but also 00-24h, 12-12h and 18-18h totals. This could be assessed on a set of stations from the Swiss hourly measurement network with rather long time series. The prediction skill was found to be improved by the MTW, and to an even greater extent after recalibrating the AM parameters. Moreover, the improvement was greater for days with heavy precipitation, which are generally related to more dynamic atmospheric situations where timing is more specific. The use of the MTW in the AM can be considered for several applications in different contexts, be it for operational forecasting or climate-related studies
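    The moving-time-window search described above can be sketched as follows. The RMSE analogy criterion, the array shapes and the shift values are illustrative assumptions, not the operational implementation: for each archived day, every allowed shift of the comparison window is tried and the best one is kept before ranking the days.

```python
import numpy as np

def best_analogues(target, archive, shifts, n_best=5):
    """Rank archived days by their best analogy to the target situation.

    target : (H, F) predictor fields at H reference hours, F grid values
    archive: (D, H_ext, F) archived fields with extra hours, so that
             shifted windows of length H fit inside H_ext
    shifts : candidate offsets of the moving time window (in time steps)
    Returns the n_best (score, day_index, best_shift) triples.
    """
    H = target.shape[0]
    ranked = []
    for d, day in enumerate(archive):
        # keep, for this day, the shift minimising the analogy criterion
        score, shift = min(
            (np.sqrt(np.mean((day[s:s + H] - target) ** 2)), s)
            for s in shifts
        )
        ranked.append((score, d, shift))
    ranked.sort()
    return ranked[:n_best]
```

    The fixed-hours method is the special case `shifts=(s0,)` with a single offset, which is what the MTW generalises.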

    Toward community predictions: Multi-scale modelling of mountain breeding birds' habitat suitability, landscape preferences, and environmental drivers

    Get PDF
    Across a large mountain area of the western Swiss Alps, we used occurrence data (presence-only points) of bird species to find suitable modelling solutions and build reliable distribution maps addressing the biodiversity and conservation needs of bird species at finer scales. We performed a multi-scale modelling method, which uses distance, climatic, and focal variables at different scales (neighbouring window sizes), to estimate the efficient scale of each environmental predictor and enhance our knowledge of how birds interact with their complex environment. To identify the best radius for each focal variable and the most efficient impact scale of each predictor, we fitted univariate models per species. In the last step, the final set of variables was employed to build ensembles of small models (ESMs) at a fine spatial resolution of 100 m and generate species distribution maps as conservation tools. We could build useful habitat suitability models for the three groups of species in the national red list. Our results indicate that, in general, the most important variables were in the group of bioclimatic variables, including "Bio11" (Mean Temperature of Coldest Quarter) and "Bio4" (Temperature Seasonality), followed by the focal variables, including "Forest", "Orchard", and "Agriculture area" as potential foraging, feeding and nesting sites. Our distribution maps are useful for identifying the most threatened species and their habitats, and for improving conservation efforts to locate bird hotspots. This is a powerful strategy to improve the ecological understanding of the distribution of bird species in a dynamic heterogeneous environment
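    The multi-scale step above (computing each focal variable at several window radii, then picking the radius that best explains the species response with a univariate model) can be sketched with a toy example. The moving-window mean, the absolute-correlation skill measure and the raster sizes are illustrative assumptions, not the study's actual predictors or model family.

```python
import numpy as np

def focal_mean(raster, radius):
    """Mean of a square moving window of side 2*radius+1, edge-padded,
    i.e. a 'focal' version of the raster at the given scale."""
    r = radius
    padded = np.pad(raster, r, mode="edge")
    h, w = raster.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += padded[r + dy:r + dy + h, r + dx:r + dx + w]
    return out / (2 * r + 1) ** 2

def best_radius(raster, presence, radii):
    """Univariate scale selection sketch: keep the window radius whose
    focal variable correlates best (in absolute value) with the
    presence/absence response."""
    def skill(r):
        x = focal_mean(raster, r).ravel()
        return abs(np.corrcoef(x, presence.ravel())[0, 1])
    return max(radii, key=skill)
```

    In the study this selection would be repeated per focal variable and per species, with proper species distribution models rather than a plain correlation.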

    A data-integration approach to correct sampling bias in species distribution models using multiple datasets of breeding birds in the Swiss Alps

    Get PDF
    It is essential to accurately model species distributions and biodiversity in response to many ecological and conservation challenges. The primary basis for reliable decision-making on conservation priorities is data on the distributions and abundance of species. However, finding data that are accurate and reliable for predicting species distributions can be challenging: data may come from different sources, with different designs, coverage, and potential sampling biases. In this study, we examined emerging species distribution modelling methods that integrate data from multiple sources, such as systematic (standardized) and casual (occasional) surveys. We applied two modelling approaches, “data pooling” and “model-based data integration”, each of which combines various datasets to measure environmental interactions and clarify the distribution of species. Our paper demonstrates a reliable data-integration workflow that consists of gathering information for model-based data integration, creating a sub-model from each dataset independently, and finally combining them into a single final model. We show that this is a more reliable way of developing a model than a data pooling strategy that combines multiple data sources to fit a single model. Moreover, data integration approaches can improve the poor predictive performance of small systematic datasets, through model-based data integration techniques that enhance the predictive accuracy of Species Distribution Models. We also found, consistent with previous research, that machine learning algorithms are the most accurate techniques to predict bird species distributions in our heterogeneous study area in the western Swiss Alps. In particular, tree-based Random Forest (RF) ensembles contribute to a better understanding of the interactions between species and the environment
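    The contrast between the two approaches can be sketched with a toy model. The gradient-descent logistic regression and the simple averaging of sub-model predictions below are illustrative stand-ins for the actual SDM algorithms used in the study; the point is only the workflow: pooling concatenates all data into one fit, while model-based integration fits one sub-model per dataset and then combines their predictions.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (weights incl. bias)."""
    Xb = np.c_[np.ones(len(X)), X]
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    """Predicted presence probability for new sites."""
    Xb = np.c_[np.ones(len(X)), X]
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def pooled_model(datasets):
    """Data pooling: concatenate all datasets and fit a single model."""
    X = np.vstack([d[0] for d in datasets])
    y = np.concatenate([d[1] for d in datasets])
    return fit_logistic(X, y)

def integrated_prediction(datasets, X_new):
    """Model-based integration: one sub-model per dataset, then
    combine (here: average) their predictions."""
    preds = [predict(fit_logistic(X, y), X_new) for X, y in datasets]
    return np.mean(preds, axis=0)
```

    A model-based combination can also weight each sub-model, e.g. by its cross-validated skill, instead of the plain average used here.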

    Automatic and global optimization of the Analogue Method for statistical downscaling of precipitation - Which parameters can be determined by Genetic Algorithms?

    Get PDF
    The Analogue Method (AM) aims at forecasting a local meteorological variable of interest (the predictand), often the daily precipitation total, on the basis of a statistical relationship with synoptic predictor variables. A certain number of similar situations are sampled in order to establish the empirical conditional distribution that is considered as the prediction for a given date. The method is used in operational medium-range forecasting by several hydropower companies and flood forecasting services, as well as in climate impact studies. The statistical relationship is usually established by means of a semi-automatic sequential procedure that has strong limitations: it is made of successive steps and thus cannot handle parameter dependencies, and it cannot automatically optimize certain parameters, such as the selection of the pressure levels and the temporal windows on which the predictors are compared. A global optimization technique based on Genetic Algorithms was introduced to overcome these limitations and to provide a fully automatic and objective determination of the AM parameters. The parameters that were previously assessed manually, such as the selection of the pressure levels and the temporal windows on which the predictors are compared, are now automatically determined. The next question is: are Genetic Algorithms able to select, in a reanalysis dataset, the meteorological variable that is the best predictor for the considered predictand, along with the analogy criterion itself? Even though we may not find better predictors for precipitation prediction than the ones often used in Europe, which have been systematically assessed in numerous other studies, the ability to select predictors automatically offers new perspectives for adapting the AM to new predictands or new regions under different meteorological influences
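    A global optimization of this kind can be sketched with a generic real-coded genetic algorithm over a vector of method parameters. The operators (elitism, tournament selection, uniform crossover, Gaussian mutation), the population settings and the toy objective below are illustrative assumptions, not the configuration used by the authors.

```python
import numpy as np

def genetic_minimize(score, bounds, pop=30, gens=40, p_mut=0.3, seed=0):
    """Tiny real-coded GA: elitism, tournament selection, uniform
    crossover, and Gaussian mutation clipped to the search bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        f = np.array([score(ind) for ind in P])
        new = [P[np.argmin(f)].copy()]          # elitism: keep the best
        while len(new) < pop:
            # tournament selection of two parents
            parents = []
            for _ in range(2):
                i, j = rng.integers(pop, size=2)
                parents.append(P[i] if f[i] < f[j] else P[j])
            # uniform crossover: each gene from a random parent
            mask = rng.random(len(bounds)) < 0.5
            child = np.where(mask, parents[0], parents[1])
            # occasional Gaussian mutation, scaled to the bound widths
            if rng.random() < p_mut:
                child = child + rng.normal(0.0, 0.1 * (hi - lo))
            new.append(np.clip(child, lo, hi))
        P = np.array(new)
    f = np.array([score(ind) for ind in P])
    best = np.argmin(f)
    return P[best], f[best]
```

    In the AM setting, `score` would run a cross-validated analogue forecast for a candidate parameter vector (pressure levels, temporal windows, number of analogues, analogy criterion, ...) and return the verification score to minimise.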