
    Vulnerability assessments of pesticide leaching to groundwater

    Pesticides may have adverse environmental effects if they are transported to groundwater and surface waters. The vulnerability of water resources to contamination by pesticides must therefore be evaluated. Different stakeholders, with different objectives and requirements, are interested in such vulnerability assessments, and various assessment methods have been developed in the past. For example, the vulnerability of groundwater to pesticide leaching may be evaluated by indices and overlay-based methods, by statistical analyses of monitoring data, or by process-based models of pesticide fate. No single tool or methodology is likely to be appropriate for all end-users and stakeholders, since suitability depends on the available data and the specific goals of the assessment. The overall purpose of this thesis was to develop tools, based on different process-based models of pesticide leaching, that may be used in groundwater vulnerability assessments. Four tools have been developed for end-users with varying goals and interests: (i) a tool based on the attenuation factor, implemented in a GIS and used to generate vulnerability maps for the islands of Hawaii (U.S.A.); (ii) a simulation tool based on the MACRO model, developed to support decision-makers at local authorities in assessing potential risks of pesticide leaching to groundwater following normal usage in drinking water abstraction districts; (iii) linked models of the soil root zone and groundwater, used to investigate leaching of the pesticide mecoprop to shallow and deep groundwater in fractured till; and (iv) a meta-model of the pesticide fate model MACRO, developed for 'worst-case' groundwater vulnerability assessments in southern Sweden. The strengths and weaknesses of the different approaches are discussed.
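
    The attenuation factor mentioned in (i) is a widely used screening index that estimates the fraction of an applied pesticide expected to reach a given depth. The abstract does not give the exact formulation used in the thesis, so the sketch below is only a minimal illustration under common assumptions; all parameter names, units and example values are placeholders rather than values from the work.

        import math

        def attenuation_factor(depth_m, recharge_m_per_d, theta_fc,
                               bulk_density_kg_per_l, f_oc, k_oc_l_per_kg,
                               half_life_d):
            # Retardation factor: sorption to organic carbon slows the
            # pesticide relative to the percolating water.
            rf = 1.0 + (bulk_density_kg_per_l * f_oc * k_oc_l_per_kg) / theta_fc
            # Travel time of the pesticide front to the given depth.
            travel_time_d = depth_m * theta_fc * rf / recharge_m_per_d
            # Mass fraction expected to reach that depth, assuming
            # first-order degradation during transport.
            return math.exp(-math.log(2.0) * travel_time_d / half_life_d)

        # Illustrative values only: shallow water table, mobile and
        # moderately persistent compound.
        print(attenuation_factor(depth_m=2.0, recharge_m_per_d=0.002,
                                 theta_fc=0.3, bulk_density_kg_per_l=1.4,
                                 f_oc=0.01, k_oc_l_per_kg=50.0,
                                 half_life_d=60.0))

    In a GIS implementation such an index would be evaluated cell by cell from soil, recharge and pesticide-property maps, and the resulting values classified into a vulnerability map.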

    Integrating and Ranking Uncertain Scientific Data

    Mediator-based data integration systems resolve exploratory queries by joining data elements across sources. In the presence of uncertainties, such multiple expansions can quickly lead to spurious connections and incorrect results. The BioRank project investigates formalisms for modeling uncertainty during scientific data integration and for ranking uncertain query results. Our motivating application is protein function prediction. In this paper we show that: (i) explicit modeling of uncertainties as probabilities increases our ability to predict less-known or previously unknown functions (though it does not improve prediction of well-known ones). This suggests that probabilistic uncertainty models offer utility for scientific knowledge discovery; (ii) small perturbations in the input probabilities tend to produce only minor changes in the quality of our result rankings. This suggests that our methods are robust against slight variations in the way uncertainties are transformed into probabilities; and (iii) several techniques allow us to evaluate our probabilistic rankings efficiently. This suggests that probabilistic query evaluation is not as hard for real-world problems as theory indicates.
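
    The abstract does not spell out the BioRank formalism itself, so the sketch below only illustrates the general idea of ranking candidate protein functions by a combined probability: each data-integration path linking a protein to a function carries an uncertainty, and, under an independence assumption, paths are combined with a noisy-OR before sorting. The function labels and probabilities are invented placeholders, not results from the paper.

        from math import prod

        def noisy_or(path_probs):
            # Probability that at least one (assumed independent) evidence
            # path supporting the prediction is correct.
            return 1.0 - prod(1.0 - p for p in path_probs)

        # Placeholder candidate functions for one protein, each supported by
        # several uncertain integration paths.
        candidates = {
            "GO:0016301 kinase activity":         [0.8],
            "GO:0004672 protein kinase activity": [0.6, 0.4],
            "GO:0005515 protein binding":         [0.3, 0.3, 0.2],
        }

        for score, function in sorted(((noisy_or(ps), f)
                                       for f, ps in candidates.items()),
                                      reverse=True):
            print(f"{score:.3f}  {function}")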

    Quantifying uncertainty in pest risk maps and assessments: adopting a risk-averse decision maker’s perspective

    Pest risk maps are important decision support tools when devising strategies to minimize introductions of invasive organisms and mitigate their impacts. When possible management responses to an invader include costly or socially sensitive activities, decision-makers tend to follow a more certain (i.e., risk-averse) course of action. We presented a new mapping technique that assesses pest invasion risk from the perspective of a risk-averse decision maker. We demonstrated the method by evaluating the likelihood that an invasive forest pest will be transported to one of the U.S. states or Canadian provinces in infested firewood by visitors to U.S. federal campgrounds. We tested the impact of the risk aversion assumption using distributions of plausible pest arrival scenarios generated with a geographically explicit model developed from data documenting camper travel across the study area. Next, we prioritized regions of high and low pest arrival risk via application of two stochastic ordering techniques that employed, respectively, first- and second-degree stochastic dominance rules, the latter of which incorporated the notion of risk aversion. We then identified regions in the study area where the pest risk value changed considerably after incorporating risk aversion. While both methods identified similar areas of highest and lowest risk, they differed in how they demarcated moderate-risk areas. In general, the second-order stochastic dominance method assigned lower risk rankings to moderate-risk areas. Overall, this new method offers a better strategy to deal with the uncertainty typically associated with risk assessments and provides a tractable way to incorporate decision-making preferences into final risk estimates, and thus helps to better align these estimates with particular decision-making scenarios about a pest organism of concern. Incorporation of risk aversion also helps prioritize the set of locations to target for inspections and outreach activities, which can be costly. Our results are especially relevant given the large number of camping trips that occur each year in the United States and Canada.
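
    The arrival model and travel data behind the maps are not reproducible from the abstract, but the two ordering rules it names are standard. The sketch below checks first- and second-degree stochastic dominance between empirical samples for two hypothetical locations; it follows the textbook convention in which larger outcomes are preferred, so for pest arrivals (where larger is worse) the comparison direction would simply be reversed. All inputs are simulated placeholders.

        import numpy as np

        def empirical_cdfs(a, b):
            # Evaluate both empirical CDFs on the pooled grid of observed values.
            grid = np.union1d(a, b)
            f_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
            f_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
            return grid, f_a, f_b

        def first_degree_dominates(a, b):
            # a dominates b to the first degree if F_a(x) <= F_b(x) everywhere,
            # i.e. a tends to produce larger values than b.
            _, f_a, f_b = empirical_cdfs(a, b)
            return bool(np.all(f_a <= f_b))

        def second_degree_dominates(a, b):
            # a dominates b to the second degree if the running integral of F_a
            # never exceeds that of F_b; this is the ordering accepted by every
            # risk-averse expected-utility maximizer.
            grid, f_a, f_b = empirical_cdfs(a, b)
            dx = np.diff(grid, prepend=grid[0])
            return bool(np.all(np.cumsum(f_a * dx) <= np.cumsum(f_b * dx) + 1e-12))

        # Simulated placeholder arrival scenarios for two locations.
        rng = np.random.default_rng(0)
        loc_a, loc_b = rng.poisson(12, 1000), rng.poisson(8, 1000)
        print(first_degree_dominates(loc_a, loc_b),
              second_degree_dominates(loc_a, loc_b))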

    Indexing Metric Spaces for Exact Similarity Search

    With the continued digitalization of societal processes, we are seeing an explosion in available data, commonly referred to as big data. In a research setting, three aspects of such data are often viewed as the main sources of challenges when attempting to enable value creation from big data: volume, velocity, and variety. Many studies address volume or velocity, while far fewer concern variety. The metric space model is well suited to addressing variety, because it can accommodate any type of data as long as the associated distance notion satisfies the triangle inequality. To accelerate search in metric spaces, many indexing techniques for metric data have been proposed. However, existing surveys each offer only narrow coverage, and no comprehensive empirical study of these techniques exists. We offer a survey of the existing metric indexes that support exact similarity search, by i) summarizing the partitioning, pruning, and validation techniques used by metric indexes, ii) providing a time and storage complexity analysis of index construction, and iii) reporting on a comprehensive empirical comparison of their similarity query processing performance. Empirical comparison is used to evaluate search performance because differences in query processing are hard to capture with complexity analysis alone, and because query performance depends on pruning and validation abilities that are tied to the data distribution. This article aims to reveal the strengths and weaknesses of the different indexing techniques, in order to offer guidance on selecting an appropriate indexing technique for a given setting and to direct future research on metric indexes.
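
    The survey compares many concrete index structures, but the exactness guarantee they share rests on the triangle inequality: for any pivot p, |d(q, p) - d(p, o)| is a lower bound on d(q, o), so an object whose bound exceeds the query radius can be pruned without computing its distance to the query. The following is a minimal linear-scan sketch of that pruning-and-validation pattern, not a reimplementation of any index from the article; names and the toy data are illustrative.

        def pivot_filtered_range_search(query, radius, objects, pivots, dist):
            # Object-to-pivot distances; in a real index these are computed
            # once at build time and stored.
            table = [[dist(p, o) for p in pivots] for o in objects]
            # Query-to-pivot distances, computed once per query.
            q_to_p = [dist(query, p) for p in pivots]

            results = []
            for o, d_op in zip(objects, table):
                lower_bound = max(abs(qp - op) for qp, op in zip(q_to_p, d_op))
                if lower_bound > radius:
                    continue                     # pruned: cannot be a result
                if dist(query, o) <= radius:     # validation with the real distance
                    results.append(o)
            return results

        # Toy usage with Euclidean distance on 2-D points.
        euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        points = [(0, 0), (1, 1), (5, 5), (2, 0), (9, 3)]
        print(pivot_filtered_range_search((1, 0), 1.5, points,
                                          pivots=[(0, 0), (10, 10)], dist=euclid))

    Because the filter only discards objects whose lower bound already exceeds the radius, the result set is identical to that of a full scan; the pruning power, and hence the query performance, depends on how well the pivots separate the data, which is why empirical comparison matters.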