113 research outputs found

    Risk Analysis

    Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment are facilitated by the availability of a set of experimental studies that span a range of dose-response patterns observed in practice. We describe the construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose-response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose-response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose-response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.
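    A minimal Python sketch of the kind of workflow the abstract describes: deriving a prior for a dose-response slope from historical fits and using it in a Bayesian (here, MAP) analysis of a small quantal study. The data, the lognormal slope prior, and the log-logistic parameterization are illustrative assumptions, not the paper's actual database or priors.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical historical slope estimates (in the paper these would come
# from fits to the 733-study database; here they are made up).
hist_slopes = np.array([0.8, 1.1, 1.5, 0.9, 2.0, 1.3, 1.7, 1.0])
mu_b, sd_b = norm.fit(np.log(hist_slopes))    # lognormal prior on the slope

# Current (small) quantal study: dose, number tested, number responding.
dose = np.array([0.0, 10.0, 30.0, 100.0])
n    = np.array([50, 50, 50, 50])
y    = np.array([2, 5, 14, 33])

def neg_log_posterior(theta):
    g0, a, log_b = theta                      # background logit, intercept, log-slope
    b = np.exp(log_b)
    bg = 1.0 / (1.0 + np.exp(-g0))            # background response probability
    z = a + b * np.log(np.where(dose > 0, dose, 1.0))
    p = np.where(dose > 0, bg + (1 - bg) / (1 + np.exp(-z)), bg)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    nll = -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))  # binomial likelihood
    # Informative prior on the log-slope from historical data; vague
    # normal priors on the remaining parameters.
    nlp = (-norm.logpdf(log_b, mu_b, sd_b)
           - norm.logpdf(g0, 0, 10) - norm.logpdf(a, 0, 10))
    return nll + nlp

fit = minimize(neg_log_posterior, x0=[-3.0, -4.0, 0.0], method="Nelder-Mead")
print("MAP estimates (background logit, intercept, log-slope):", fit.x)
```

    The historical prior pulls the slope toward plausible values, which is the stabilizing effect on point estimates that the abstract reports.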

    Statistical evaluation of toxicological bioassays - a review

    The basic conclusions in almost all reports on new drug applications and in all publications in toxicology are based on statistical methods. However, serious contradictions exist in practice: designs with small sample sizes but use of asymptotic methods (i.e., methods constructed for larger sample sizes), statistically significant findings without biological relevance (and vice versa), proof of hazard vs. proof of safety, testing (e.g., no-observed-effect level) vs. estimation (e.g., benchmark dose), and available statistical theory vs. related user-friendly software. In this review, biostatistical developments from about the year 2000 onwards are discussed, structured mainly around repeated-dose studies and mutagenicity, carcinogenicity, reproductive, and ecotoxicological assays. A critical discussion is included on the unnecessarily conservative evaluation proposed in guidelines, the inadequate but almost universally used proof-of-hazard approach, and the limitations of data-dependent decision-tree approaches.
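    As a minimal illustration of the small-sample contradiction noted above, the following Python sketch compares an asymptotic chi-square test with Fisher's exact test on a two-group quantal table; the counts are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Tumour counts in a small two-group design (control vs. high dose,
# 10 animals per group) -- numbers are made up for illustration.
table = np.array([[1, 9],    # control: 1 responder, 9 non-responders
                  [5, 5]])   # treated: 5 responders, 5 non-responders

chi2, p_asym, _, _ = chi2_contingency(table, correction=False)
_, p_exact = fisher_exact(table)

# With n = 10 per group the asymptotic p-value can differ noticeably
# from the exact one, which is the review's point about small designs.
print(f"asymptotic chi-square p = {p_asym:.3f}, Fisher exact p = {p_exact:.3f}")
```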

    Worth weighting for : studies on benchmark dose analysis in relation to animal ethics in toxicity testing

    A purpose of chemical health risk assessment is to characterize the nature and size of the health risk associated with exposure to chemicals, including identification of a dose below which toxic effects are not expected or are negligible. This is usually based on analysis of dose-response data from toxicity studies on animals. Traditionally, the dose-response in animals has been analyzed using the No-Observed-Adverse-Effect-Level (NOAEL) approach, but because of its several flaws it is increasingly being replaced by the so-called Benchmark Dose (BMD) approach. Previous evaluations of how to design studies in order to obtain as much information as possible from a limited number of experimental animals have revealed the importance of including high doses. However, these studies have not taken into account the distress of the laboratory animals, which is likely to be higher at high doses. The overall aim of the present thesis was to examine how study designs, especially those with dose groups of unequal size, affect the quality of BMD estimates and the level of animal distress. In Paper I, our computer simulations concerning the appropriateness of using nested models in BMD modelling of continuous endpoints indicate that it is problematic to calculate the BMD on the basis of simpler models and that such models should be used with caution in connection with risk assessment, as they may result in underestimates of the true BMD. In Papers II-III, our computer simulations of toxicity testing with unequal group sizes showed that better information about the dose-response can be obtained with designs that also reduce the level of animal distress. In Paper IV, we interviewed members of the Swedish Animal Ethics Committees concerning how the number of animals used in toxicity tests might be weighed against the distress of the individual animal. Their opinions concerning whether it is preferable to use fewer animals that suffer more, rather than a larger number of animals that suffer a little, differed considerably between individuals. However, there were no statistically significant differences related to whether respondents were researchers, political representatives, or representatives of animal welfare organizations. In Paper V, the results from Paper IV and the simulation techniques from Paper II were combined to evaluate how toxicity tests could be designed to obtain as much information as possible at a limited ethical cost, with respect to both the number of animals used and their individual distress. The most ethically efficient design depended on what constituted the ethical cost and how large that ethical cost was. In conclusion, this thesis describes the potential of BMD-aligned study design as a means of refining animal toxicity testing. In addition, new strategies for model selection and quantitative measures of ethical weights are presented.
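    A minimal sketch of the BMD idea referred to above, assuming a simple logistic quantal model and made-up data (the thesis itself treats continuous endpoints and nested models): the BMD is the dose at which extra risk over background reaches the benchmark response, here 10%.

```python
import numpy as np
from scipy.optimize import minimize, brentq

# Hypothetical quantal toxicity data: dose, animals tested, responders.
dose = np.array([0.0, 5.0, 20.0, 80.0])
n    = np.array([20, 20, 20, 20])
y    = np.array([1, 3, 8, 16])

def nll(theta):
    a, b = theta
    p = 1.0 / (1.0 + np.exp(-(a + b * dose)))   # simple logistic model
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

a_hat, b_hat = minimize(nll, x0=[-2.0, 0.05], method="Nelder-Mead").x

def extra_risk(d):
    p0 = 1.0 / (1.0 + np.exp(-a_hat))
    pd = 1.0 / (1.0 + np.exp(-(a_hat + b_hat * d)))
    return (pd - p0) / (1.0 - p0)

# BMD = dose giving 10% extra risk over background (BMR = 0.10).
bmd = brentq(lambda d: extra_risk(d) - 0.10, 1e-6, dose.max())
print(f"BMD (10% extra risk) = {bmd:.2f}")
```

    In regulatory practice a lower confidence limit on the BMD (the BMDL) would also be reported; it is omitted here for brevity.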

    Prediction intervals based on historical control data obtained from bioassays

    The calculation of prediction intervals based on historical control data from bioassays is of interest in many areas of biological research. In pharmaceutical and preclinical applications, such as immunogenicity testing, the calculation of prediction intervals (or upper prediction limits) that discriminate between anti-drug-antibody-positive and anti-drug-antibody-negative patients is of interest. In (eco)toxicology, various bioassays are applied to study the toxicological properties of a given chemical compound in model organisms (e.g., its carcinogenicity or its effects on aquatic food chains). In this field of research, it is of interest to verify whether the outcome of the current untreated control (or of the current study as a whole) is consistent with the historical information. For this purpose, prediction intervals based on historical control data can be computed. If the current observations fall within the prediction interval, they can be assumed to be consistent with the historical information. The first chapter of this thesis gives a detailed overview of the use of historical control data in the context of biological experiments. In addition, it reviews the data structures involved (dichotomous data, count data, continuous data) and the models on which the proposed prediction intervals are based. For dichotomous and count data, particular attention is paid to overdispersion, which is common in data with a biological background but is mostly ignored in the literature on prediction intervals. Hence, prediction intervals for one future observation based on overdispersed binomial data were proposed. The coverage probabilities of these intervals were evaluated using Monte Carlo simulations and were considerably closer to the nominal level than those of prediction intervals from the literature that do not account for overdispersion (see Sections 2.1 and 2.2). In several applications the dependent variable is continuous and assumed to be normally distributed; nevertheless, the data may be influenced by several random factors (for example, different laboratories analyzing samples from several patients). In this case, the data can be modeled using linear random-effects models whose parameters are estimated by restricted maximum likelihood. For this scenario, two prediction intervals are proposed in Section 2.3. One of them is based on a bootstrap calibration procedure, which also makes it applicable in cases where a prediction interval for more than one future observation is required. Section 2.4 describes the R package predint, which implements the bootstrap-calibrated prediction interval described in Section 2.3 (as well as lower and upper prediction limits), along with prediction intervals for at least one future observation based on overdispersed binomial or count data. The core of this thesis is the computation of prediction intervals for one or more future observations based on overdispersed binomial data, overdispersed count data, or linear random-effects models. To the author's knowledge, this is the first time that prediction intervals accounting for overdispersion have been proposed. Furthermore, "predint" is the first R package available on CRAN that provides functions for applying prediction intervals to the models mentioned above. The methodology proposed in this thesis is thus publicly available and can easily be applied by other researchers.
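    The following Python sketch illustrates the basic idea of a prediction interval that accounts for overdispersion in historical binomial control data. It uses a rough Wald-type interval with a quasi-binomial dispersion estimate; the thesis's bootstrap-calibrated intervals (implemented in the R package predint) are more refined, and all numbers here are made up.

```python
import numpy as np
from scipy.stats import norm

# Historical control data: responders x out of n animals per study
# (hypothetical numbers for illustration).
x = np.array([3, 5, 1, 4, 6, 2, 3, 5, 2, 4])
n = np.full(10, 50)

p_hat = x.sum() / n.sum()

# Quasi-binomial dispersion estimate via the Pearson statistic.
pearson = np.sum((x - n * p_hat) ** 2 / (n * p_hat * (1 - p_hat)))
phi = max(pearson / (len(x) - 1), 1.0)        # floor at 1 (no underdispersion)

# Asymptotic 95% prediction interval for the responder count in one
# future control group of size n_fut: inflate both the sampling variance
# of the future count and the estimation variance of p_hat by phi.
n_fut = 50
var_fut  = phi * n_fut * p_hat * (1 - p_hat)
var_phat = phi * p_hat * (1 - p_hat) / n.sum()
se = np.sqrt(var_fut + n_fut**2 * var_phat)
z = norm.ppf(0.975)
lower, upper = n_fut * p_hat - z * se, n_fut * p_hat + z * se
print(f"95% PI for future control responders: [{max(lower, 0):.1f}, {upper:.1f}]")
```

    A current control result falling outside such an interval would flag inconsistency with the historical controls, which is the decision rule described in the abstract.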

    NIOSH practices in occupational risk assessment

    Minor revisions were made to the front matter to indicate actual publication in March 2020, and a new DOI was assigned: https://doi.org/10.26616/NIOSHPUB2020106revised032020. "Exposure to on-the-job health hazards is a problem faced by workers worldwide. Unlike safety hazards that may lead to injury, health hazards can lead to various types of illness. For example, exposures to some chemicals used in work processes may cause immediate sensory irritation (e.g., stinging or burning eyes, dry throat, cough); in other cases, workplace chemicals may cause cancer in workers many years after exposure. There are millions of U.S. workers exposed to chemicals in their work each year. In order to make recommendations for working safely in the presence of chemical hazards, the National Institute for Occupational Safety and Health (NIOSH) conducts risk assessments. In simple terms, risk assessment is a way of relating a hazard, like a toxic chemical in the air, to potential health risks associated with exposure to that hazard. Risk assessment allows NIOSH to make recommendations for controlling exposures in the workplace to reduce health risks. This document describes the process and logic NIOSH uses to conduct risk assessments, including the following steps: 1) Determining what type of hazard is associated with a chemical or other agent; 2) Collating the scientific evidence indicating whether the chemical or other agent causes illness or injury; 3) Evaluating the scientific data and determining how much exposure to the chemical or other agent would be harmful to workers; and 4) Carefully considering all relevant evidence to make the best, scientifically supported decisions. NIOSH researchers publish risk assessments in peer-reviewed scientific journals and in NIOSH-numbered documents. NIOSH-numbered publications also provide recommendations aimed to improve worker safety and health that stem from risk assessment." NIOSHTIC-2 no. 20058814. Suggested citation: NIOSH [2020]. Current intelligence bulletin 69: NIOSH practices in occupational risk assessment. By Daniels RD, Gilbert SJ, Kuppusamy SP, Kuempel ED, Park RM, Pandalai SP, Smith RJ, Wheeler MW, Whittaker C, Schulte PA. Cincinnati, OH: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health. DHHS (NIOSH) Publication No. 2020-106 (Revised 03/2020), https://doi.org/10.26616/NIOSHPUB2020106revised032020

    Applications of Non-Parametric Kernel Smoothing Estimators in Monte Carlo Risk Assessments

    This dissertation addresses two separate issues involving the estimation of risk. The first issue concerns the creation of a schedule for the viability testing of seeds stored in long-term storage facilities. The second concerns the time required to simulate risk using a two-dimensional Monte Carlo simulation. Genebank managers conduct viability tests on stored seeds so they can replace lots that have viability near a critical threshold, such as 50 or 85% germination. Currently, these tests are typically scheduled at uniform intervals; testing every 5 years is common. A manager needs to balance the cost of an additional test against the possibility of losing a seed lot due to late retesting. We developed a data-informed method to schedule viability tests for a collection of 2,833 maize seed lots with 3 to 7 completed viability tests per lot. Given these historical data reporting on seed viability at arbitrary times, we fit a hierarchical Bayesian seed-viability model with random seed-lot-specific coefficients. The posterior distribution of the predicted time to cross below a critical threshold was estimated for each seed lot. We recommend a predicted quantile as a retest time, chosen to balance the importance of catching quickly decaying lots against the cost of premature tests. The method can be used with any seed-viability model; we focused on two, the Avrami viability curve and a quadratic curve that accounts for seed after-ripening. After fitting both models, we found that the quadratic curve gave more plausible predictions than the Avrami curve. Also, a receiver operating characteristic (ROC) curve analysis and a follow-up test demonstrated that a 0.05 quantile yields reasonable predictions. The two-dimensional Monte Carlo simulation is an important tool for quantitative risk assessors. Its framework easily propagates aleatoric and epistemic uncertainties related to risk. Aleatoric uncertainty concerns the inherent, irreducible variability of a risk factor; epistemic uncertainty concerns the reducible uncertainty of a fixed risk factor. The total crop yield of a corn field is an example of an aleatoric uncertainty, while the mean corn yield is an epistemic uncertainty. The traditional application of a two-dimensional Monte Carlo simulation in a risk assessment requires many Monte Carlo samples. In a common case, a risk assessor samples 10,000 epistemic factor vectors; for each vector, the assessor generates 10,000 vectors of aleatoric factors and calculates risk. The purpose of this heavy aleatoric simulation is to estimate a cumulative frequency distribution (CDF) of risk conditional on an epistemic vector. This approach requires 10^8 calculations of risk and is computationally slow. We propose a more efficient method that reduces the number of simulations in the aleatoric dimension by pooling together risk values of epistemic vectors close to a target epistemic vector and estimating the conditional CDF using the multivariate Nadaraya-Watson estimator. We examine the risk of hemolytic uremic syndrome in young children exposed to Escherichia coli O157:H7 in frozen ground beef patties and demonstrate that our method replicates the results of the traditional two-dimensional Monte Carlo risk assessment. Furthermore, for this problem, we find that our method is three times faster than the traditional method. In order to perform the modified two-dimensional Monte Carlo simulation of risk, we must specify a bandwidth, h. In general, researchers pick an h that balances the estimator's bias and variance. They minimize criteria such as average squared error (ASE), penalized ASE, or asymptotic mean integrated squared error (AMISE) to select an optimal h. A review of the optimal-bandwidth-selection literature related to multivariate kernel-regression estimation shows that there is still ambiguity about the best bandwidth selector. We compare the effects of five penalized-ASE bandwidth selectors and an AMISE bandwidth plug-in on the average accuracy of a multivariate Nadaraya-Watson kernel-regression estimator of a CDF of hemolytic uremic syndrome (HUS) risk in young children exposed to Escherichia coli O157:H7 in ground beef patties. We consider these six bandwidth selectors because they compute relatively quickly, and researchers generally desire fast results. Simulating different amounts of data (n_e = 1000, 3000, and 5000) from each of three HUS-risk models of varying complexity, we find that none of the selectors consistently results in the most accurate CDF estimator. However, if the goal is to produce accurate quantile-quantile risk assessment results (Pouillot and Delignette-Muller, 2010), then the AMISE-based selector performs best.
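    A toy Python sketch of the pooling idea: instead of running a full aleatoric simulation for each epistemic vector, risk draws from nearby epistemic vectors are pooled with Nadaraya-Watson kernel weights to estimate the conditional CDF. The risk function, distributions, and fixed bandwidth are illustrative assumptions, not the dissertation's HUS model or its data-driven bandwidth selectors.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy two-dimensional Monte Carlo: the epistemic factor is an uncertain
# mean log-exposure; the aleatoric factor is individual exposure.
n_epi, n_ale = 2000, 50                      # modest sizes for the demo
mu = rng.normal(-3.0, 0.5, size=n_epi)       # epistemic draws
exposure = rng.lognormal(mu[:, None], 1.0, size=(n_epi, n_ale))
risk = 1.0 - np.exp(-0.01 * exposure)        # toy dose-response risk

def nw_conditional_cdf(mu0, t, h=0.1):
    """Nadaraya-Watson estimate of P(risk <= t | mu = mu0), pooling risk
    draws from epistemic vectors near mu0 (Gaussian kernel, bandwidth h)."""
    w = np.exp(-0.5 * ((mu - mu0) / h) ** 2)   # kernel weight per epistemic vector
    ind = (risk <= t).mean(axis=1)             # within-vector empirical CDF
    return np.sum(w * ind) / np.sum(w)

# Conditional CDF of risk at a few thresholds, given mu = -3.
for t in (0.001, 0.005, 0.02):
    print(f"P(risk <= {t:g} | mu = -3) ~ {nw_conditional_cdf(-3.0, t):.3f}")
```

    Because each epistemic vector needs only a small aleatoric sample, the total number of risk calculations drops well below the traditional 10^8, which is the source of the speed-up the abstract reports.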

    Towards the development of a mycoinsecticide to control white grubs (Coleoptera: Scarabaeidae) in South African sugarcane

    In the KwaZulu-Natal (KZN) Midlands North region of South Africa, the importance and increased prevalence of endemic scarabaeids, particularly Hypopholis sommeri Burmeister and Schizonycha affinis Boheman (Coleoptera: Melolonthinae), as soil pests of sugarcane, and the need for their control, were established. The development of a mycoinsecticide offers an environmentally friendly alternative to chemical insecticides. A diversity of white grub species representing seven genera in two Scarabaeidae subfamilies was collected in sugarcane as a pest complex. Hypopholis sommeri and S. affinis were the most prevalent species. The increased seasonal abundance, diversity, and highly aggregated nature of these scarabaeid species in the summer months suggested that targeting and control strategies for these pests should be concentrated in this season. Increased rainfall, relative humidity, and soil temperatures were linked to the increased occurrence of scarab adults and neonate grubs. Beauveria brongniartii (Saccardo) Petch epizootics on H. sommeri were recorded at two sites in the KZN Midlands North. Seventeen different fluorescently labelled microsatellite PCR primers were used to target the DNA of 78 Beauveria isolates. Microsatellite data resolved two distinct clusters of Beauveria isolates, representing the Beauveria bassiana sensu stricto (Balsamo) Vuillemin and B. brongniartii species groups. These groupings were supported by two gene regions, the nuclear ribosomal Internal Transcribed Spacer (ITS) and the nuclear B locus (Bloc) gene, for which 23 exemplar Beauveria isolates were sequenced. When the microsatellite data were analysed, 26 haplotypes were distinguished among 58 isolates of B. brongniartii. Relatively low levels of genetic diversity were detected in B. brongniartii, and isolates were shown to be closely related. There was no genetic differentiation between the two sites, Harden Heights and Canema, in the KZN Midlands North. High gene flow from swarming H. sommeri beetles is the proposed mechanism for this lack of genetic differentiation between populations. Microsatellite analyses also showed that B. brongniartii conidia were being cycled from arboreal to subterranean habitats by H. sommeri beetles. This was the first record of this species of fungus causing epizootics on the larvae and adults of H. sommeri in South Africa. The virulence of 21 isolates of Beauveria brongniartii and two isolates of B. bassiana was evaluated against the adults and larvae of S. affinis and the adults of H. sommeri and Tenebrio molitor Linnaeus (Coleoptera: Tenebrionidae). Despite being closely related, B. brongniartii isolates varied significantly in their virulence towards different hosts, highlighting the host-specific nature of B. brongniartii towards S. affinis when compared with B. bassiana. Adults of S. affinis were significantly more susceptible to B. brongniartii isolates than second-instar (L2) or third-instar (L3) grubs. The median lethal time (LT₅₀) of the most virulent B. brongniartii isolate (C13) against S. affinis adults was 7.8 days, and probit analysis estimated a median lethal concentration (LC₅₀) of 4.4×10⁷ conidia/ml. When L2 grubs were treated with a concentration of 1.0×10⁸ conidia/ml, B. brongniartii isolates HHWG1, HHB39A, and C17 caused mortality in L2 grubs within 18.4-19.8 days (LT₅₀). Beauveria brongniartii isolate HHWG1 was tested against L3 grubs of S. affinis at four different concentrations. At the lowest concentration (1×10⁶ conidia/ml) the LT₅₀ was 25.8 days, and at the highest concentration (1×10⁹ conidia/ml) the LT₅₀ dropped to 15.1 days. The persistence of B. bassiana isolate 4222, formulated on rice and wheat bran and buried at eight field sites in the KZN Midlands North, was evaluated by plating a suspension of treated soil onto a selective medium. All eight field sites showed a significant decline in B. bassiana CFUs per gram of soil over time, with few conidia still present in the samples after a year. Greater declines in CFUs were observed at some sites, but there were no significant differences in the persistence of conidia formulated on rice or wheat bran as carriers. Overall, the poor persistence of B. bassiana isolate 4222 was attributed to suboptimal temperatures, rainfall (which rapidly degraded the nutritive carriers), an attenuated fungal genotype, and the action of antagonistic soil microbes. Growers' perceptions of white grubs as pests and the feasibility of a mycoinsecticide market were evaluated by means of a semi-structured questionnaire. The study showed that the reduced feasibility of application, a general lack of potential demand for a product, high costs and, most importantly, the lack of pest perception were factors that would negatively affect the adoption of a granular mycoinsecticide. Growers, however, exhibited a positive attitude towards mycoinsecticides and showed all the relevant attributes for successful technology adoption. Because B. brongniartii epizootics were recorded on target pests, indicating good host specificity, dispersal ability, and persistence of the fungus in the intended environment of application, it is recommended that a mycoinsecticide based on this fungal species be developed. Adoption and success of a mycoinsecticide would likely be increased by collaboration between various industry partners to increase market potential in other crops such as Acacia mearnsii De Wild. (Fabales: Fabaceae).
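    A brief Python sketch of the probit analysis behind the LC₅₀ estimates mentioned above: mortality counts are regressed on log₁₀ concentration with a probit link, and the LC₅₀ is the concentration at which predicted mortality is 50%. The counts below are hypothetical; the thesis reports only the resulting estimates.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical mortality of adults at several conidial concentrations.
conc = np.array([1e5, 1e6, 1e7, 1e8, 1e9])   # conidia/ml
n    = np.full(5, 30)
dead = np.array([3, 8, 14, 22, 27])

def nll(theta):
    a, b = theta
    p = norm.cdf(a + b * np.log10(conc))     # probit link on log10 concentration
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(dead * np.log(p) + (n - dead) * np.log(1 - p))

a_hat, b_hat = minimize(nll, x0=[-5.0, 1.0], method="Nelder-Mead").x

# LC50 is the concentration where the probit equals zero (50% mortality).
lc50 = 10 ** (-a_hat / b_hat)
print(f"estimated LC50 = {lc50:.2e} conidia/ml")
```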