
    Revisiting Species Sensitivity Distribution : modelling species variability for the protection of communities

    Species Sensitivity Distribution (SSD) is a method used by scientists and regulators from all over the world to determine the safe concentration for various contaminants stressing the environment. Although ubiquitous, this approach suffers from several methodological flaws, notably because it is based on an incomplete use of the experimental data. This thesis revisits classical SSD and attempts to overcome this shortcoming. First, we present a methodology to include censored data in SSD, together with a web tool to apply it easily. Second, we propose to model all the information present in the experimental data in order to describe the response of a community exposed to a contaminant. To this aim, we develop a hierarchical model within a Bayesian framework. On a dataset describing the effect of pesticides on diatom growth, we illustrate how this method, which accounts for variability as well as uncertainty, benefits risk assessment. Third, we extend this hierarchical approach to include the temporal dimension of the community response. The objective of that development is to remove the dependence of risk assessment on the date of the last experimental observation, in order to build a precise description of the response's time evolution and to extrapolate to longer times. This approach is built on a toxico-dynamic model and illustrated on a dataset describing the salinity tolerance of freshwater species.
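
    As a point of reference for the classical approach the thesis revisits, here is a minimal sketch of the standard log-normal SSD (the notation is illustrative, not taken from the thesis): each tested species i contributes one critical effect concentration c_i, and the hazardous concentration for 5% of species, HC5, is the 5th percentile of the fitted distribution.

        \[
        \log_{10} c_i \sim \mathcal{N}(\mu, \sigma^2), \qquad
        \widehat{HC_5} = 10^{\,\hat\mu + z_{0.05}\,\hat\sigma},
        \]

    where z_{0.05} \approx -1.645 is the 5% standard normal quantile. With censored c_i, as in the first part of the thesis, \hat\mu and \hat\sigma can be obtained by maximising a likelihood in which censored observations enter through the normal cumulative distribution function rather than its density.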

    Bayesian Functional Forecasting with Locally-Autoregressive Dependent Processes

    Motivated by the problem of forecasting demand and offer curves, we introduce a class of nonparametric dynamic models with locally-autoregressive behaviour, and provide a full inferential strategy for forecasting time series of piecewise-constant non-decreasing functions over arbitrary time horizons. The model is induced by a non-Markovian system of interacting particles whose evolution is governed by a resampling step and a drift mechanism. The former is based on a global interaction and accounts for the volatility of the functional time series, while the latter is determined by a neighbourhood-based interaction with the past curves and accounts for local trend behaviours, separating these from pure noise. We discuss the implementation of the model for functional forecasting by combining a population Monte Carlo scheme with a semi-automatic learning approach to approximate Bayesian computation, which together require limited tuning. We validate the inference method with a simulation study, and carry out predictive inference on a real dataset on the Italian natural gas market.
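
    The paper's inferential scheme combines population Monte Carlo with semi-automatic summary statistics for approximate Bayesian computation. As a baseline illustration of the ABC idea it builds on, here is a minimal rejection-ABC sketch; the function names and the simulator/summary interfaces are placeholders, not the paper's implementation.

        import numpy as np

        def abc_rejection(observed, simulate, summarize, prior_sampler,
                          n_draws=10_000, keep_frac=0.01):
            """Keep the prior draws whose simulated summaries are closest
            to the observed summaries (plain rejection ABC)."""
            s_obs = np.asarray(summarize(observed), dtype=float)
            draws, dists = [], []
            for _ in range(n_draws):
                theta = prior_sampler()
                s_sim = np.asarray(summarize(simulate(theta)), dtype=float)
                draws.append(theta)
                dists.append(np.linalg.norm(s_sim - s_obs))
            cutoff = np.quantile(dists, keep_frac)
            return [t for t, d in zip(draws, dists) if d <= cutoff]

    Population Monte Carlo ABC refines this by iterating over decreasing tolerances and reweighting the accepted draws, while the semi-automatic step replaces hand-picked summaries with a regression-based projection of the data.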

    Hierarchical modelling of species sensitivity distribution: development and application to the case of diatoms exposed to several herbicides

    The Species Sensitivity Distribution (SSD) is a key tool to assess the ecotoxicological threat of a contaminant to biodiversity. It predicts safe concentrations for a contaminant in a community. Widely used, this approach suffers from several drawbacks: i) summarizing the sensitivity of each species by a single value entails a loss of valuable information about the other parameters characterizing the concentration-effect curves; ii) it does not propagate the uncertainty on the critical effect concentration into the SSD; iii) the hazardous concentration estimated with SSD only indicates the threat to biodiversity, without any insight into a global response of the community related to the measured endpoint. We revisited the current SSD approach to account for all the sources of variability and uncertainty in the prediction and to assess a global response for the community. For this purpose, we built a global hierarchical model including the concentration-response model together with the distribution law for the SSD. Working within a Bayesian framework, we were able to compute an SSD taking into account all the uncertainty from the original raw data. From model simulations, it is also possible to extract a quantitative indicator of a global response of the community to the contaminant. We applied this methodology to study the toxicity of six herbicides to benthic diatoms from Lake Geneva, measured as biomass reduction.
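
    One plausible formulation of a hierarchical concentration-response SSD of the kind described here, written in our own notation (the paper's exact likelihood and priors may differ): the growth endpoint follows a log-logistic dose-response curve per species, and the species-level effect concentrations share a community-level distribution that plays the role of the SSD.

        \[
        y_{ijk} \sim \mathcal{N}\!\left( \frac{d_i}{1 + (x_j / e_i)^{b_i}},\ \tau^2 \right),
        \qquad
        \log_{10} e_i \sim \mathcal{N}(\mu, \sigma^2),
        \]

    where y_{ijk} is the endpoint of species i at concentration x_j in replicate k and (d_i, b_i, e_i) are species-level curve parameters. Posterior samples of (\mu, \sigma) then carry the concentration-response uncertainty all the way into the hazardous concentration, and simulating new species from the fitted community distribution is one way to summarise a global community response of the kind mentioned in the abstract.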

    Approximate filtering via discrete dual processes

    We consider the task of filtering a dynamic parameter evolving as a diffusion process, given data collected at discrete times from a likelihood which is conjugate to the marginal law of the diffusion, when a generic dual process on a discrete state space is available. Recently, it was shown that duality with respect to a death-like process implies that the filtering distributions are finite mixtures, making exact filtering and smoothing feasible through recursive algorithms with polynomial complexity in the number of observations. Here we provide general results for the case of duality between the diffusion and a regular jump continuous-time Markov chain on a discrete state space, which typically leads to filtering distributions given by countable mixtures indexed by the dual process state space. We investigate the performance of several approximation strategies on two hidden Markov models driven by Cox-Ingersoll-Ross and Wright-Fisher diffusions, which admit duals of birth-and-death type, and compare them with the available exact strategies based on death-type duals and with bootstrap particle filtering on the diffusion state space as a general benchmark.
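
    The bootstrap particle filter used as the general benchmark can be sketched as follows for a hidden Cox-Ingersoll-Ross state; the Euler discretisation and the Poisson observation model are illustrative assumptions on our part, not details taken from the paper.

        import numpy as np

        def bootstrap_pf_cir(obs, dt, a, b, sigma, n_part=1000, seed=0):
            """Bootstrap particle filter for a latent CIR diffusion
            dX_t = a (b - X_t) dt + sigma sqrt(X_t) dW_t, observed through
            counts y_t ~ Poisson(X_t); returns the filtering means."""
            rng = np.random.default_rng(seed)
            x = np.full(n_part, b, dtype=float)   # start at the stationary mean
            means = []
            for y in obs:
                # propagate each particle with one Euler step (clipped at zero)
                x = x + a * (b - x) * dt \
                    + sigma * np.sqrt(np.maximum(x, 0.0) * dt) * rng.normal(size=n_part)
                x = np.maximum(x, 1e-8)
                # weight by the Poisson log-likelihood, then resample
                logw = y * np.log(x) - x
                w = np.exp(logw - logw.max())
                w /= w.sum()
                means.append(float(np.sum(w * x)))
                x = rng.choice(x, size=n_part, p=w)
            return np.array(means)

    The exact and approximate dual-process filters studied in the paper replace this sampling step with finite or truncated mixture recursions over the dual state space.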

    Approximating the clusters' prior distribution in Bayesian nonparametric models

    In Bayesian nonparametrics, knowledge of the prior distribution induced on the number of clusters is key for prior specification and calibration. However, evaluating this prior is notoriously difficult, even for moderate sample sizes. We evaluate several statistical approximations to the prior distribution of the number of clusters for Gibbs-type processes, a class including the Pitman-Yor process and the normalized generalized gamma process. We introduce a new approximation based on the predictive distribution of Gibbs-type processes, which compares favourably with the existing methods. We thoroughly discuss the limitations of these various approximations by comparing them against an exact implementation of the prior distribution of the number of clusters.
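
    For the Pitman-Yor member of the Gibbs-type class, the prior on the number of clusters can also be approximated by brute-force Monte Carlo through its sequential predictive (Chinese-restaurant-type) scheme. This is only a naive baseline, not one of the approximations studied in the paper; the parameter names are ours.

        import numpy as np

        def pitman_yor_cluster_prior(n, theta, d, n_sims=10_000, seed=0):
            """Monte Carlo estimate of P(K_n = k) under a Pitman-Yor(d, theta)
            process, obtained by simulating the sequential seating scheme:
            a new cluster opens with probability (theta + d*k)/(theta + i)."""
            rng = np.random.default_rng(seed)
            counts = np.zeros(n + 1)
            for _ in range(n_sims):
                sizes = [1]                      # the first draw opens a cluster
                for i in range(1, n):
                    k = len(sizes)
                    probs = np.array([s - d for s in sizes] + [theta + d * k])
                    probs /= theta + i
                    j = rng.choice(k + 1, p=probs)
                    if j == k:
                        sizes.append(1)          # open a new cluster
                    else:
                        sizes[j] += 1            # join an existing cluster
                counts[len(sizes)] += 1
            return counts / n_sims               # index k holds P(K_n = k)

    Such simulation becomes slow precisely in the regime of moderate to large n that the paper targets, which is what motivates the analytical approximations compared there.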

    On the use of human mobility proxy for the modeling of epidemics

    Human mobility is a key component of large-scale spatial-transmission models of infectious diseases. Correctly modeling and quantifying human mobility is critical for improving epidemic control policies, but may be hindered by incomplete data in some regions of the world. Here we explore the opportunity of using proxy data or models for individual mobility to describe commuting movements and predict the diffusion of infectious disease. We consider three European countries and the corresponding commuting networks at different resolution scales, obtained from official census surveys, from proxy data for human mobility extracted from mobile phone call records, and from the radiation model calibrated with census data. Metapopulation models defined on the three countries and integrating the different mobility layers are compared in terms of epidemic observables. We show that commuting networks from mobile phone data capture the empirical commuting patterns well, accounting for more than 87% of the total fluxes. The distributions of commuting fluxes per link from both sources of data - mobile phones and census - are similar and highly correlated; however, a systematic overestimation of commuting traffic in the mobile phone data is observed. This leads to epidemics that spread faster than on census commuting networks, while preserving the order in which new locations become infected. The match in the epidemic invasion pattern is sensitive to initial conditions: the radiation model shows higher accuracy with respect to mobile phone data when the seed is central in the network, while the mobile phone proxy performs better for epidemics seeded in peripheral locations. These results suggest that different proxies can be used to approximate commuting patterns across different resolution scales in spatial epidemic simulations, in light of the desired accuracy in the epidemic outcome under study.
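
    The radiation model used as the third mobility layer has a closed-form flux expression, which can be sketched as follows; the array-based interface and variable names are ours, and a real application would calibrate the total number of outgoing commuters per location against census data, as the paper does.

        import numpy as np

        def radiation_fluxes(pop, coords, commuters):
            """Radiation-model commuting fluxes T[i, j] between locations,
            given populations pop[i], planar coordinates coords[i] and the
            total number of outgoing commuters commuters[i]:
            T_ij = commuters_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij)),
            where s_ij is the population inside the circle of radius d(i, j)
            centred on i, excluding the source and the destination."""
            pop = np.asarray(pop, dtype=float)
            coords = np.asarray(coords, dtype=float)
            dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            n_loc = len(pop)
            T = np.zeros((n_loc, n_loc))
            for i in range(n_loc):
                for j in range(n_loc):
                    if i == j:
                        continue
                    # population strictly closer to i than j is, minus i itself
                    s_ij = pop[dist[i] < dist[i, j]].sum() - pop[i]
                    m, nj = pop[i], pop[j]
                    T[i, j] = commuters[i] * m * nj / ((m + s_ij) * (m + nj + s_ij))
            return T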

    SSD revisited: modelling species variability to protect communities

    Species Sensitivity Distribution (SSD) is a method used by scientists and regulators from all over the world to determine the safe concentration for various contaminants stressing the environment. Although ubiquitous, this approach suffers from several methodological flaws, notably because it is based on an incomplete use of the experimental data. This thesis revisits classical SSD and attempts to overcome this shortcoming. First, we present a methodology to include censored data in SSD, together with a web tool to apply it easily. Second, we propose to model all the information present in the experimental data in order to describe the response of a community exposed to a contaminant. To this aim, we develop a hierarchical model within a Bayesian framework. On a dataset describing the effect of pesticides on diatom growth, we illustrate how this method, which accounts for variability as well as uncertainty, benefits risk assessment. Third, we extend this hierarchical approach to include the temporal dimension of the community response. The objective of that development is to remove the dependence of risk assessment on the date of the last experimental observation, in order to build a precise description of the response's time evolution and to extrapolate to longer times. This approach is built on a toxico-dynamic model and illustrated on a dataset describing the salinity tolerance of freshwater species.
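
    For the time-resolved extension in the third part, one common toxicodynamic way of writing survival under a constant stressor level (shown here only as an illustration; the thesis's own model may differ) is a threshold hazard:

        \[
        S_i(t, C) = \exp\!\big( -\,[\, h_{b,i} + k_i \max(0,\ C - z_i) \,]\, t \big),
        \]

    where S_i(t, C) is the probability that species i survives to time t at salinity C, h_{b,i} is a background hazard rate, z_i a tolerance threshold and k_i a killing rate. Giving (z_i, k_i) community-level distributions makes the SSD explicitly time-dependent and supports extrapolation beyond the last observation time, which is the stated aim of this part.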