Vulnerability assessments of pesticide leaching to groundwater
Pesticides may have adverse environmental effects if they are transported to groundwater and surface waters. The vulnerability of water resources to contamination by pesticides must therefore be evaluated. Different stakeholders, with different objectives and requirements, are interested in such vulnerability assessments. Various assessment methods have been developed in the past. For example, the vulnerability of groundwater to pesticide leaching may be evaluated by indices and overlay-based methods, by statistical analyses of monitoring data, or by using process-based models of pesticide fate. No single tool or methodology is likely to be appropriate for all end-users and stakeholders, since suitability depends on the available data and the specific goals of the assessment. The overall purpose of this thesis was to develop tools, based on different process-based models of pesticide leaching, that may be used in groundwater vulnerability assessments. Four different tools have been developed for end-users with varying goals and interests: (i) a tool based on the attenuation factor implemented in a GIS, where vulnerability maps are generated for the islands of Hawaii (U.S.A.), (ii) a simulation tool based on the MACRO model developed to support decision-makers at local authorities in assessing potential risks of pesticide leaching to groundwater following normal usage in drinking water abstraction districts, (iii) linked models of the soil root zone and groundwater to investigate leaching of the pesticide mecoprop to shallow and deep groundwater in fractured till, and (iv) a meta-model of the pesticide fate model MACRO developed for 'worst-case' groundwater vulnerability assessments in southern Sweden. The strengths and weaknesses of the different approaches are discussed.
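The abstract does not give the thesis's implementation, but the attenuation factor it refers to has a well-known closed form (Rao et al., 1985): the pesticide's travel time to groundwater is scaled by a sorption-dependent retardation factor, and first-order decay over that travel time gives the fraction of applied mass expected to leach. A minimal sketch, with all parameter values invented for illustration:

```python
import math

def attenuation_factor(depth_m, recharge_m_per_d, theta_fc, bulk_density,
                       koc_l_per_kg, f_oc, half_life_d):
    """Attenuation factor (AF) leaching index, after Rao et al. (1985).

    Higher AF (closer to 1) means a larger fraction of the applied
    pesticide is expected to reach groundwater, i.e. higher vulnerability.
    """
    kd = koc_l_per_kg * f_oc                 # sorption coefficient (L/kg)
    rf = 1.0 + bulk_density * kd / theta_fc  # retardation factor (-)
    travel_time_d = depth_m * rf * theta_fc / recharge_m_per_d
    return math.exp(-0.693 * travel_time_d / half_life_d)

# Illustrative comparison (invented soil and pesticide properties):
# a mobile, persistent compound vs. a strongly sorbing, short-lived one.
mobile = attenuation_factor(2.0, 0.002, 0.3, 1.4, 20, 0.02, 180)
sorbed = attenuation_factor(2.0, 0.002, 0.3, 1.4, 500, 0.02, 30)
```

Run per grid cell over mapped soil and climate data, an index like this yields the kind of GIS vulnerability map described for Hawaii, at the cost of ignoring the transient, preferential-flow processes that the MACRO-based tools capture.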
Integrating and Ranking Uncertain Scientific Data
Mediator-based data integration systems resolve exploratory queries by joining data elements across sources. In the presence of uncertainties, such multiple expansions can quickly lead to spurious connections and incorrect results. The BioRank project investigates formalisms for modeling uncertainty during scientific data integration and for ranking uncertain query results. Our motivating application is protein function prediction. In this paper we show that: (i) explicit modeling of uncertainties as probabilities increases our ability to predict less-known or previously unknown functions (though it does not improve prediction of well-known functions). This suggests that probabilistic uncertainty models offer utility for scientific knowledge discovery; (ii) small perturbations in the input probabilities tend to produce only minor changes in the quality of our result rankings. This suggests that our methods are robust against slight variations in the way uncertainties are transformed into probabilities; and (iii) several techniques allow us to evaluate our probabilistic rankings efficiently. This suggests that probabilistic query evaluation is not as hard for real-world problems as theory indicates.
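The paper's actual formalism is not spelled out in the abstract; as a toy sketch of the general idea, each join chain linking a protein to a candidate function can carry a probability, and independent evidence paths can be combined into a single ranking score. The proteins and probabilities below are invented:

```python
def combine_independent(path_probs):
    """P(at least one evidence path is correct), assuming the paths
    are independent -- a simplifying assumption, not BioRank's model."""
    p_none = 1.0
    for p in path_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Hypothetical evidence paths linking proteins to a predicted function,
# each path annotated with a probability from the uncertainty model.
evidence = {
    "P1": [0.9],            # one strong path
    "P2": [0.5, 0.5, 0.5],  # several weaker paths
    "P3": [0.2],
}
ranking = sorted(evidence,
                 key=lambda k: combine_independent(evidence[k]),
                 reverse=True)
# ranking == ["P1", "P2", "P3"]
```

Note how P2's three weak paths nearly overtake P1's single strong one (0.875 vs. 0.9); ranking by combined probability rather than by best single path is what lets accumulated weak evidence surface less-known functions.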
Quantifying uncertainty in pest risk maps and assessments: adopting a risk-averse decision maker’s perspective
Pest risk maps are important decision support tools when devising strategies to minimize introductions of invasive organisms and mitigate their impacts. When possible management responses to an invader include costly or socially sensitive activities, decision-makers tend to follow a more certain (i.e., risk-averse) course of action. We presented a new mapping technique that assesses pest invasion risk from the perspective of a risk-averse decision maker. We demonstrated the method by evaluating the likelihood that an invasive forest pest will be transported to one of the U.S. states or Canadian provinces in infested firewood by visitors to U.S. federal campgrounds. We tested the impact of the risk aversion assumption using distributions of plausible pest arrival scenarios generated with a geographically explicit model developed from data documenting camper travel across the study area. Next, we prioritized regions of high and low pest arrival risk via application of two stochastic ordering techniques that employed, respectively, first- and second-degree stochastic dominance rules, the latter of which incorporated the notion of risk aversion. We then identified regions in the study area where the pest risk value changed considerably after incorporating risk aversion. While both methods identified similar areas of highest and lowest risk, they differed in how they demarcated moderate-risk areas. In general, the second-order stochastic dominance method assigned lower risk rankings to moderate-risk areas. Overall, this new method offers a better strategy to deal with the uncertainty typically associated with risk assessments and provides a tractable way to incorporate decision-making preferences into final risk estimates, and thus helps to better align these estimates with particular decision-making scenarios about a pest organism of concern.
Incorporation of risk aversion also helps prioritize the set of locations to target for inspections and outreach activities, which can be costly. Our results are especially important and useful given the huge number of camping trips that occur each year in the United States and Canada.
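The two stochastic ordering rules the abstract names have standard definitions: distribution A first-degree dominates B when A's CDF lies at or below B's everywhere (A is stochastically larger, here riskier), and second-degree dominance compares the running integrals of the CDFs, which is where risk aversion enters. A minimal sketch on empirical samples (the data and the riskier-is-larger orientation are illustrative, not the paper's):

```python
import numpy as np

def dominates(a, b, degree=1):
    """Check whether arrival-count sample `a` stochastically dominates
    `b` (i.e. `a` is the riskier distribution) at the given degree."""
    grid = np.sort(np.concatenate([a, b]))
    # Empirical CDFs evaluated on the pooled grid
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    if degree == 2:
        # Second degree: compare integrated CDFs (risk aversion)
        dx = np.diff(grid, prepend=grid[0])
        Fa, Fb = np.cumsum(Fa * dx), np.cumsum(Fb * dx)
    # Dominance: CDF (or its integral) never above, strictly below somewhere
    return bool(np.all(Fa <= Fb) and np.any(Fa < Fb))

# Invented pest-arrival samples for two locations
loc_high = np.array([3.0, 4.0, 5.0, 6.0])
loc_low = np.array([1.0, 2.0, 3.0, 4.0])
```

Pairwise checks like this, applied across locations, yield the partial risk ordering used to rank map regions; second-degree comparisons can resolve pairs that first-degree dominance leaves incomparable, which matches the paper's observation that the two rules mainly disagree on moderate-risk areas.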
Uncertainty explicit assessment of off-the-shelf software: A Bayesian approach
Assessment of software COTS components is an essential part of component-based software development. Poorly chosen components may lead to solutions of low quality that are difficult to maintain. The assessment may be based on incomplete knowledge about the COTS component itself and other aspects (e.g. the vendor’s credentials), which may affect the decision of selecting COTS component(s). We argue in favor of assessment methods in which uncertainty is explicitly represented (‘uncertainty explicit’ methods) using probability distributions. We provide details of a Bayesian model, which can be used to capture the uncertainties in the simultaneous assessment of two attributes, thus also capturing the dependencies that might exist between them. We also provide empirical data from the use of this method for the assessment of off-the-shelf database servers, which illustrate the advantages of ‘uncertainty explicit’ methods over conventional methods of COTS component assessment that assume the values of the attributes become known with certainty at the end of the assessment.
VarSight: prioritizing clinically reported variants with binary classification algorithms.
Background: When applying genomic medicine to a rare disease patient, the primary goal is to identify one or more genomic variants that may explain the patient's phenotypes. Typically, this is done through annotation, filtering, and then prioritization of variants for manual curation. However, prioritization of variants in rare disease patients remains a challenging task due to the high degree of variability in phenotype presentation and molecular source of disease. Thus, methods that can identify and/or prioritize variants to be clinically reported in the presence of such variability are of critical importance. Methods: We tested the application of classification algorithms that ingest variant annotations along with phenotype information for predicting whether a variant will ultimately be clinically reported and returned to a patient. To test the classifiers, we performed a retrospective study on variants that were clinically reported to 237 patients in the Undiagnosed Diseases Network. Results: We treated the classifiers as variant prioritization systems and compared them to four variant prioritization algorithms and two single-measure controls. We showed that the trained classifiers outperformed all other tested methods, with the best classifiers ranking 72% of all reported variants and 94% of reported pathogenic variants in the top 20. Conclusions: We demonstrated how freely available binary classification algorithms can be used to prioritize variants even in the presence of real-world variability. Furthermore, these classifiers outperformed all other tested methods, suggesting that they may be well suited for working with real rare disease patient datasets.
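The core move of the paper, using a binary classifier's predicted probability as a ranking score, can be sketched without its actual features or trained models. Here a hand-set logistic model stands in for a trained classifier, and the variants, features, and weights are all invented:

```python
import math

def predict_proba(features, weights, bias):
    """Logistic score; the weights stand in for a classifier trained on
    past clinically reported variants (invented here)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-variant features: (deleteriousness score,
# allele rarity, phenotype-match score), each scaled to [0, 1].
variants = {
    "var_a": (0.9, 0.8, 0.9),   # damaging, rare, fits the phenotype
    "var_b": (0.2, 0.1, 0.3),   # benign-looking and common
    "var_c": (0.7, 0.9, 0.1),   # damaging and rare, poor phenotype fit
}
weights, bias = (2.0, 1.5, 3.0), -4.0

# Treat the classifier as a prioritizer: rank variants by probability
ranked = sorted(variants,
                key=lambda v: predict_proba(variants[v], weights, bias),
                reverse=True)
# ranked == ["var_a", "var_c", "var_b"]
```

The evaluation in the paper then reduces to asking where the clinically reported variant lands in such a ranked list (e.g. within the top 20), rather than thresholding the classifier's output.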
Indexing Metric Spaces for Exact Similarity Search
With the continued digitalization of societal processes, we are seeing an explosion in available data, commonly referred to as big data. In a research setting, three aspects of the data are often viewed as the main sources of challenges when attempting to enable value creation from big data: volume, velocity, and variety. Many studies address volume or velocity, while far fewer address variety. Metric spaces are well suited to addressing variety because they can accommodate any type of data as long as its associated distance notion satisfies the triangle inequality. To accelerate search in metric spaces, a collection of indexing techniques for metric data has been proposed. However, existing surveys each offer only narrow coverage, and no comprehensive empirical study of those techniques exists. We offer a survey of all existing metric indexes that can support exact similarity search, by i) summarizing all the existing partitioning, pruning, and validation techniques used for metric indexes, ii) providing time and storage complexity analyses of index construction, and iii) reporting on a comprehensive empirical comparison of their similarity query processing performance. Empirical comparison is used to evaluate index performance during search because differences are hard to discern from complexity analysis alone, and because query performance depends on pruning and validation abilities that are sensitive to the data distribution. This article aims to reveal the strengths and weaknesses of different indexing techniques, in order to offer guidance on selecting an appropriate indexing technique for a given setting and to direct future research on metric indexes.
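The pruning-then-validation pattern the survey covers rests entirely on the triangle inequality: with precomputed distances from each object to a few pivots, |d(q, p) − d(o, p)| lower-bounds d(q, o), so many objects can be discarded without any distance computation. A minimal pivot-based range search, with a toy 1-D metric space standing in for arbitrary metric data:

```python
def pivot_filter(query, pivots, objects, pivot_dists, dist, radius):
    """Range search with pivot-based pruning in a metric space.

    pivot_dists[o][i] = d(o, pivots[i]), precomputed at build time.
    By the triangle inequality, |d(q, p_i) - d(o, p_i)| <= d(q, o),
    so an object is pruned whenever that lower bound exceeds `radius`.
    """
    q_to_p = [dist(query, p) for p in pivots]
    results = []
    for o in objects:
        lower = max(abs(qp - op) for qp, op in zip(q_to_p, pivot_dists[o]))
        if lower > radius:
            continue                    # pruned without computing d(q, o)
        if dist(query, o) <= radius:    # validate with the real distance
            results.append(o)
    return results

# Toy usage: 1-D points under absolute difference (any metric works)
dist = lambda a, b: abs(a - b)
pivots = [0.0, 10.0]
objects = [1.0, 5.0, 9.0]
pivot_dists = {o: [dist(o, p) for p in pivots] for o in objects}
hits = pivot_filter(2.0, pivots, objects, pivot_dists, dist, radius=1.5)
# hits == [1.0]; 5.0 and 9.0 are pruned by the pivot bounds alone
```

How many objects survive the `lower > radius` test depends on the pivots and the data distribution, which is exactly why the survey argues that complexity analysis alone cannot separate the indexes and an empirical comparison is needed.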