
    Introducing a moving time window in the analogue method for precipitation prediction to find better analogue situations at a sub-daily time step

    Analogue methods (AMs) predict local weather variables (predictands), such as precipitation, by means of a statistical relationship with predictors at a synoptic scale. Predictors are extracted from reanalysis datasets that often have a six-hourly time step. For precipitation forecasts, the predictand often consists of daily precipitation (06h to 30h UTC), given the length of the available archives and the unavailability of equivalent archives at a finer time step. The optimal predictors to explain these daily precipitation totals have been obtained in a calibration procedure with fixed times of observation (e.g. geopotential heights Z1000 at 12h UTC and Z500 at 24h UTC). In operational forecasting, a new target situation is defined by its geopotential predictors at these fixed hours, i.e. Z1000 at 12h UTC and Z500 at 24h UTC. The search for candidate situations for a given target day is usually undertaken by comparing the state of the atmosphere at the same fixed hours of the day for both the target day and the candidate analogues. However, it can be expected that the best analogy among past synoptic situations does not systematically occur at the same time of the day, and that better candidates can be found by shifting to a different hour. With this assumption, a moving time window (MTW) was introduced to allow the search for candidates at different hours of the day (e.g. Z1000 at 00, 06, 12, 18 h UTC and Z500 at 12, 18, 24, 30 h UTC respectively). The MTW technique can only result in a better analogy in terms of atmospheric circulation (compared to the method with fixed hours), with improved values of the analogy criterion over the entire distribution of analogue dates. A seasonal effect was also identified, with larger improvements in winter than in summer. However, its interest for precipitation forecasting can only be evaluated with an archive of the corresponding 24h totals (i.e. not only 06-30h UTC totals, but also 00-24h, 12-12h and 18-18h totals). This could be assessed on a set of stations from the Swiss hourly measurement network with rather long time series. The prediction skill was found to be improved by the MTW, and to an even greater extent after recalibrating the AM parameters. Moreover, the improvement was greater for days with heavy precipitation, which are generally related to more dynamic atmospheric situations where timing is more specific. The use of the MTW in the AM can be considered for several applications in different contexts, be it for operational forecasting or climate-related studies.
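    As a rough illustration of the MTW idea (not the authors' implementation), the Python sketch below scores each candidate day at several hour shifts and keeps its best score. A plain RMSE stands in for the analogy criterion (operational AMs often use the Teweles-Wobus S1 score), and the archive layout and function names are assumptions made for the example.

```python
import numpy as np

def analogy_score(target_field, candidate_field):
    """Simple analogy criterion: RMSE between two gridded predictor fields.
    (Operational AMs often use the Teweles-Wobus S1 score instead.)"""
    return np.sqrt(np.mean((target_field - candidate_field) ** 2))

def best_analogues_mtw(target, archive, shifts=(-12, -6, 0, 6, 12), n_analogues=30):
    """Select analogue days using a moving time window (MTW).

    target  : 2D array, predictor field of the target situation at its fixed hour.
    archive : dict mapping (day_index, hour_shift) -> 2D predictor field, where
              hour_shift is the offset (in hours) relative to the fixed hour.
    For every candidate day, the analogy is evaluated at each allowed shift and
    the best (lowest) score is retained, instead of comparing at the fixed hour only.
    """
    days = sorted({day for day, _ in archive})
    scored = []
    for day in days:
        scores = [analogy_score(target, archive[(day, s)])
                  for s in shifts if (day, s) in archive]
        if scores:
            scored.append((min(scores), day))   # keep the best shift for this day
    scored.sort()
    return scored[:n_analogues]                 # n best analogue days and their scores
```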

    Design of a geodetic database and associated tools for monitoring rock-slope movements: the example of the top of Randa rockfall scar

    The need for monitoring slope movements increases with the growing demand for new inhabitable areas and new land-management requirements. Rock-slope monitoring implies the use of a database, but also of other tools that facilitate the analysis of movements. The experience and the philosophy of monitoring the top of the Randa rockfall scar, which is sliding down into the valley near Randa village in Switzerland, are presented. The database includes data correction tools, display facilities and information about benchmarks. Tools for analysing acceleration and spatial changes of the movement and for forecasting it are also presented. Using the database and its tools, it was possible to discriminate errors from critical slope movement, which demonstrates their efficiency in monitoring the Randa scar.
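    A minimal sketch of the kind of analysis such a database supports, assuming benchmark displacements are stored as simple time series; the functions and the outlier-screening rule below are illustrative, not the tools described in the paper.

```python
import numpy as np

def velocity_and_acceleration(t_days, displacement_mm):
    """Finite-difference velocity (mm/day) and acceleration (mm/day^2)
    from a benchmark displacement time series."""
    v = np.gradient(displacement_mm, t_days)
    a = np.gradient(v, t_days)
    return v, a

def flag_suspect_measurements(displacement_mm, k=3.0):
    """Flag steps that deviate from the median step by more than k median
    absolute deviations: likely measurement errors rather than genuine
    accelerations (a crude screening rule, not the authors' method)."""
    steps = np.diff(displacement_mm)
    med = np.median(steps)
    mad = np.median(np.abs(steps - med)) + 1e-12
    return np.abs(steps - med) > k * mad
```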

    Toward community predictions : Multi-scale modelling of mountain breeding birds' habitat suitability, landscape preferences, and environmental drivers

    Across a large mountain area of the western Swiss Alps, we used occurrence data (presence-only points) of bird species to find suitable modelling solutions and build reliable distribution maps addressing the biodiversity and conservation needs of bird species at finer scales. We applied a multi-scale modelling approach, which uses distance, climatic, and focal variables at different scales (neighbouring window sizes), to estimate the most effective scale of each environmental predictor and enhance our knowledge of how birds interact with their complex environment. To identify the best radius for each focal variable and the most effective impact scale of each predictor, we fitted univariate models per species. In the last step, the final set of variables was used to build ensembles of small models (ESMs) at a fine spatial resolution of 100 m and to generate species distribution maps as conservation tools. We could build useful habitat suitability models for the three groups of species in the national red list. Our results indicate that, in general, the most important variables were the bioclimatic variables, including "Bio11" (Mean Temperature of Coldest Quarter) and "Bio4" (Temperature Seasonality), followed by the focal variables, including "Forest", "Orchard", and "Agriculture area" as potential foraging, feeding and nesting sites. Our distribution maps are useful for identifying the most threatened species and their habitats, and for targeting conservation efforts towards bird hotspots. This is a powerful strategy to improve the ecological understanding of the distribution of bird species in a dynamic, heterogeneous environment.
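    A minimal sketch of the scale-selection step, under the assumption that a focal variable is a moving-window proportion of a land-cover class and that the "best" radius is the one whose univariate model discriminates occurrences from background best (AUC); square windows and logistic regression are simplifications for the example, not the study's exact setup.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def focal_means(binary_cover, window_sizes):
    """Moving-window proportion of a land-cover class (e.g. forest) at several
    neighbourhood sizes; square windows are used here as a simplification."""
    return {w: uniform_filter(binary_cover.astype(float), size=w)
            for w in window_sizes}

def best_focal_scale(binary_cover, rows, cols, presence, window_sizes=(3, 9, 27, 81)):
    """Fit one univariate logistic model per window size and return the size
    whose focal variable best separates presence from background points (AUC)."""
    layers = focal_means(binary_cover, window_sizes)
    best = None
    for w, layer in layers.items():
        x = layer[rows, cols].reshape(-1, 1)    # focal value at each occurrence/background cell
        probs = LogisticRegression().fit(x, presence).predict_proba(x)[:, 1]
        auc = roc_auc_score(presence, probs)
        if best is None or auc > best[1]:
            best = (w, auc)
    return best                                 # (window size, AUC)
```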

    A data-integration approach to correct sampling bias in species distribution models using multiple datasets of breeding birds in the Swiss Alps

    It is essential to accurately model species distributions and biodiversity in response to many ecological and conservation challenges. Data on the distribution and abundance of species are the primary basis for reliable decision-making on conservation priorities. However, finding data that are accurate and reliable for predicting species distributions can be challenging: data may come from different sources, with different designs, coverage, and potential sampling biases. In this study, we examined emerging species distribution modelling methods that integrate data from multiple sources, such as systematic (standardized) and casual (occasional) surveys. We applied two modelling approaches, "data pooling" and "model-based data integration", each of which combines several datasets to measure environmental interactions and clarify the distribution of species. Our paper demonstrates a reliable data-integration workflow for model-based integration: a sub-model is created independently for each dataset, and the sub-models are then combined into a single final model. We show that this is a more reliable way of developing a model than a data-pooling strategy that merges multiple data sources to fit a single model. Moreover, data-integration approaches can improve the poor predictive performance of small systematic datasets, as model-based integration enhances the predictive accuracy of species distribution models. We also found, consistent with previous research, that machine learning algorithms are the most accurate techniques for predicting bird species distributions in our heterogeneous study area in the western Swiss Alps. In particular, tree-based Random Forest (RF) ensembles contribute to a better understanding of the interactions between species and the environment.
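    A simplified sketch of the two strategies compared, assuming each dataset is an (X, y) pair of environmental predictors and detections; averaging the sub-model predictions stands in for the paper's model-based combination, which may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pooled_model(datasets):
    """Data pooling: concatenate all datasets and fit a single model."""
    X = np.vstack([X_i for X_i, _ in datasets])
    y = np.concatenate([y_i for _, y_i in datasets])
    return RandomForestClassifier(n_estimators=500).fit(X, y)

def integrated_prediction(datasets, X_new):
    """Model-based integration (simplified): fit one sub-model per dataset,
    then combine their habitat-suitability predictions by averaging."""
    sub_models = [RandomForestClassifier(n_estimators=500).fit(X_i, y_i)
                  for X_i, y_i in datasets]
    preds = np.column_stack([m.predict_proba(X_new)[:, 1] for m in sub_models])
    return preds.mean(axis=1)   # combined prediction across datasets
```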

    DSE

    Available on GitHub: https://github.com/adririquelme/DSE. Discontinuity Set Extractor (DSE) was programmed by Adrián Riquelme as part of his PhD studies. Its aim is to extract discontinuity sets from a rock mass. The input data is a 3D point cloud, which can be acquired by means of a 3D laser scanner (LiDAR or TLS), digital photogrammetry techniques (such as SfM) or synthetic data. It applies a proposed methodology to semi-automatically identify the points of an unorganised 3D point cloud that are arranged in planes in 3D space.
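    A condensed sketch of the underlying idea (per-point normals from local PCA, then grouping of orientations), assuming a plain NumPy point cloud; DSE itself applies a more elaborate methodology, so the simple k-means-style grouping here is only a stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_normals(points, k=20):
    """Estimate a unit normal per point from the local PCA of its k nearest
    neighbours (eigenvector of the smallest eigenvalue of the local covariance)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbh = points[nb] - points[nb].mean(axis=0)
        _, vecs = np.linalg.eigh(nbh.T @ nbh)    # eigenvectors sorted by eigenvalue
        n = vecs[:, 0]                            # direction of least variance
        normals[i] = n if n[2] >= 0 else -n       # keep a consistent hemisphere
    return normals

def discontinuity_sets(normals, n_sets=3, iters=20, seed=0):
    """Group normals into orientation clusters, each cluster approximating
    one discontinuity set (plain k-means on the unit sphere)."""
    rng = np.random.default_rng(seed)
    centres = normals[rng.choice(len(normals), n_sets, replace=False)]
    for _ in range(iters):
        labels = np.argmax(np.abs(normals @ centres.T), axis=1)  # ignore normal sign
        for j in range(n_sets):
            if np.any(labels == j):
                c = normals[labels == j].mean(axis=0)
                centres[j] = c / np.linalg.norm(c)
    return labels, centres
```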

    Automatic and global optimization of the Analogue Method for statistical downscaling of precipitation - Which parameters can be determined by Genetic Algorithms?

    The Analogue Method (AM) aims at forecasting a local meteorological variable of interest (the predictand), often the daily precipitation total, on the basis of a statistical relationship with synoptic predictor variables. A certain number of similar situations are sampled in order to establish the empirical conditional distribution, which is considered as the prediction for a given date. The method is used in operational medium-range forecasting by several hydropower companies and flood forecasting services, as well as in climate impact studies. The statistical relationship is usually established by means of a semi-automatic sequential procedure that has strong limitations: it consists of successive steps and thus cannot handle parameter dependencies, and it cannot automatically optimize certain parameters, such as the selection of the pressure levels and of the temporal windows on which the predictors are compared. A global optimization technique based on Genetic Algorithms was introduced to overcome these limitations and to provide a fully automatic and objective determination of the AM parameters. The parameters that were previously assessed manually, such as the pressure levels and the temporal windows on which the predictors are compared, are now determined automatically. The next question is: are Genetic Algorithms able to select, within a reanalysis dataset, the meteorological variable that is the best predictor for the considered predictand, along with the analogy criterion itself? Even though we may not find better predictors for precipitation forecasting than the ones commonly used in Europe, given the numerous studies that have systematically assessed them, the ability to select predictors automatically offers new perspectives for adapting the AM to new predictands or new regions under different meteorological influences.
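    A minimal sketch of a Genetic Algorithm searching a few AM parameters, assuming a `fitness` function that calibrates the AM with a given parameter set and returns its forecast skill; the parameter lists and operators below are illustrative, not the study's configuration.

```python
import random

# Candidate values for a few AM parameters; the fitness function is supplied
# by the caller and stands in for a full calibration run (e.g. a CRPS skill score).
LEVELS = [500, 700, 850, 1000]         # pressure levels (hPa)
HOURS = [0, 6, 12, 18, 24, 30]         # observation hour of the predictor (UTC)
N_ANALOGUES = list(range(10, 60, 5))   # number of analogue situations sampled

def random_individual():
    return {"level": random.choice(LEVELS),
            "hour": random.choice(HOURS),
            "n_analogues": random.choice(N_ANALOGUES)}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(ind, rate=0.2):
    pools = {"level": LEVELS, "hour": HOURS, "n_analogues": N_ANALOGUES}
    return {k: (random.choice(pools[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def genetic_search(fitness, pop_size=40, generations=50):
    """Generic GA loop: selection by fitness, crossover, mutation.
    fitness(ind) must return the forecast skill of an AM calibrated with the
    parameters in ind (higher is better)."""
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```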

    Spatial pattern of landslides in Swiss Rhone valley

    The present study analyses the spatial pattern of Quaternary gravitational slope deformations (GSD) and historical/present-day instabilities (HPI) inventoried in the Swiss Rhone Valley. The main objective is to test whether these events are clustered (spatial attraction) or randomly distributed (spatial independence). Moreover, analogies with the cluster behaviour of earthquakes inventoried in the same area were examined. Ripley's K-function was applied to measure and test for randomness. This indicator describes the spatial pattern of a point process at increasing distance values. To account for the non-constant intensity of the geological phenomena, a modification of the K-function for inhomogeneous point processes was adopted. The specific goal is to explore the spatial attraction (i.e. cluster behaviour) among landslide events and between gravitational slope deformations and earthquakes. To determine whether the two classes of instabilities (GSD and HPI) are spatially independently distributed, the cross K-function was computed. The results show that all the geological events under study are spatially clustered within a well-defined distance range. GSD and HPI show a similar distribution pattern, with clusters in the range 0.75–9 km. The cross K-function reveals an attraction between the two classes of instabilities in the range 0–4 km, confirming that HPI are more prone to occur within large-scale slope deformations. The K-function computed for GSD and earthquakes indicates that both present a clustering tendency in the range 0–10 km, suggesting that earthquakes could represent a potential predisposing factor influencing the GSD distribution.
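    A naive sketch of the homogeneous Ripley's K estimate (without edge correction), to make the clustering diagnostic concrete; the study itself relies on the inhomogeneous and cross K-function variants, and the coordinates below are synthetic.

```python
import numpy as np
from scipy.spatial.distance import pdist

def ripley_k(points, r_values, area):
    """Naive Ripley's K estimate (no edge correction) for a 2D point pattern.
    Under complete spatial randomness K(r) ~ pi * r^2; larger observed values
    indicate clustering at that distance."""
    n = len(points)
    d = pdist(points)                     # all pairwise distances
    lam = n / area                        # intensity (points per unit area)
    return np.array([2.0 * np.sum(d <= r) / (lam * n) for r in r_values])

# Example: compare an observed pattern with the CSR expectation.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(200, 2))   # hypothetical event coordinates (km)
r = np.linspace(0.5, 5, 10)
k_obs = ripley_k(pts, r, area=100.0)
k_csr = np.pi * r ** 2                    # expectation under spatial randomness
```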