
    Benchmark Calculations for Perchlorate from Three Human Cohorts

    The presence of low concentrations of perchlorate in some drinking water sources has led to concern regarding potential effects on the thyroid. In a recently published report, the National Academy of Sciences indicated that the perchlorate dose required to cause hypothyroidism in adults would probably be > 0.40 mg/kg-day for months or longer. In this study, we calculated benchmark doses for perchlorate from thyroid-stimulating hormone (TSH) and free thyroxine (T4) serum indicators from two occupational cohorts with long-term exposure to perchlorate, and from a clinical study of volunteers exposed to perchlorate for 2 weeks. The benchmark dose for a particular serum indicator was defined as the dose predicted to cause an additional 5 or 10% of persons to have a serum measurement outside of the normal range. Using the data from the clinical study, we estimated the half-life of perchlorate in serum at 7.5 hr and the volume of distribution at 0.34 L/kg. Using these estimates and measurements of perchlorate in serum or urine, doses in the occupational cohorts were estimated and used in the benchmark calculations. Because none of the three studies found a significant effect of perchlorate on TSH or free T4, all of the benchmark dose estimates were indistinguishable from infinity. The lower 95% statistical confidence limits on benchmark doses estimated from a combined analysis of the two occupational studies ranged from 0.21 to 0.56 mg/kg-day for the free T4 index and from 0.36 to 0.92 mg/kg-day for TSH. Corresponding estimates from the short-term clinical study were within these ranges.
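    The dose reconstruction described above follows from standard first-order kinetics: at steady state, the absorbed dose rate equals clearance times the serum concentration, and clearance is the elimination rate constant (ln 2 divided by the half-life) times the volume of distribution. A minimal sketch of that back-calculation, using the kinetic parameters reported in the abstract, is shown below; the serum concentration in the example is hypothetical.

```python
import math

# A minimal sketch, assuming steady-state, first-order elimination.
# The half-life and volume of distribution are the values reported in
# the abstract; the serum concentration in the example is hypothetical.

HALF_LIFE_HR = 7.5    # serum half-life of perchlorate (hours), from the study
V_D_L_PER_KG = 0.34   # volume of distribution (L/kg), from the study

def daily_dose_mg_per_kg(serum_mg_per_l: float) -> float:
    """Back-calculate dose (mg/kg-day) from a steady-state serum level."""
    k_e = math.log(2) / HALF_LIFE_HR        # elimination rate constant (1/hr)
    clearance = k_e * V_D_L_PER_KG          # clearance (L/kg-hr)
    return serum_mg_per_l * clearance * 24  # dose rate (mg/kg-day)

# Hypothetical worker with 0.5 mg/L perchlorate in serum:
print(f"estimated dose: {daily_dose_mg_per_kg(0.5):.2f} mg/kg-day")
```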

    Dispelling urban myths about default uncertainty factors in chemical risk assessment - Sufficient protection against mixture effects?

    Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we show that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors depends on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can both account for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. We contend that precautionary regulation should provide an incentive to generate better data, and recommend adopting a pragmatic, but scientifically better founded, approach to mixture risk assessment.
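    The abstract's point that probabilistic multiplication of sub-factors depends on the chosen distributions can be illustrated with a short Monte Carlo sketch. Below, the default factor of 100 is split into four sub-factors (interspecies and intraspecies, each with a kinetic and a dynamic component), each sampled from a lognormal with median 10^0.5 so that the median product is 100; the geometric standard deviations are illustrative assumptions, not values from the paper, and the upper percentile of the product shifts substantially as they change.

```python
import numpy as np

# A minimal sketch, assuming four lognormal sub-factors
# (interspecies/intraspecies x kinetic/dynamic), each with median 10**0.5
# so the median product is 100. The geometric standard deviations (GSDs)
# are illustrative assumptions, not values from the paper; the point is
# that the upper tail of the combined factor depends on them.

rng = np.random.default_rng(0)
N = 100_000

def combined_factor_percentile(gsd: float, q: float = 95.0) -> float:
    """q-th percentile of the product of four lognormal sub-factors."""
    median = 10 ** 0.5
    samples = rng.lognormal(np.log(median), np.log(gsd), size=(4, N))
    return float(np.percentile(samples.prod(axis=0), q))

for gsd in (1.5, 2.0, 3.0):
    print(f"GSD {gsd}: 95th percentile of combined factor ~ "
          f"{combined_factor_percentile(gsd):,.0f}")
```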

    Chemical carcinogenesis


    Benchmark dose for cadmium-induced renal effects in humans

    OBJECTIVES: Our goal in this study was to explore the use of a hybrid approach to calculate benchmark doses (BMDs) and their 95% lower confidence bounds (BMDLs) for renal effects of cadmium in a population with low environmental exposure. METHODS: Morning urine and blood samples were collected from 820 Swedish women 53-64 years of age. We measured urinary cadmium (U-Cd) and tubular effect markers [N-acetyl-beta-D-glucosaminidase (NAG) and human complex-forming protein (protein HC)] in 790 women, and estimated glomerular filtration rate (GFR; based on serum cystatin C) in 700 women. Age, body mass index, use of nonsteroidal anti-inflammatory drugs, and blood lead levels were used as covariates for estimated GFR. BMDs/BMDLs corresponding to an additional risk (benchmark response) of 5 or 10% were calculated (the background risk at zero exposure was set to 5%). The results were compared with the estimated critical concentrations obtained by applying the logistic models used in previous studies to the present data. RESULTS: For both NAG and protein HC, the BMDs (BMDLs) of U-Cd were 0.5-1.1 (0.4-0.8) µg/L (adjusted to a specific gravity of 1.015 g/mL) and 0.6-1.1 (0.5-0.8) µg/g creatinine. For estimated GFR, the BMDs (BMDLs) were 0.8-1.3 (0.5-0.9) µg/L adjusted for specific gravity and 1.1-1.8 (0.7-1.2) µg/g creatinine. CONCLUSION: The benchmark doses of U-Cd obtained were lower than the critical concentrations previously reported. The critical dose level for glomerular effects was only slightly higher than that for tubular effects. We suggest that the hybrid approach is more appropriate for estimating the critical U-Cd concentration, because the choice of cutoff values in logistic models largely influenced the critical U-Cd concentrations obtained.
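    The hybrid approach referenced in this abstract treats the effect marker as a continuous variable, fixes the "abnormal" cutoff so that the background risk at zero exposure is 5%, and defines the BMD as the exposure at which the additional risk reaches the benchmark response. The sketch below implements that logic for a single marker; the linear dose-response model and its coefficients are hypothetical placeholders, not the models fitted in the study.

```python
from scipy.optimize import brentq
from scipy.stats import norm

# A minimal sketch of the hybrid approach, assuming the log-transformed
# effect marker is normally distributed with a mean that increases linearly
# with urinary cadmium. The model coefficients are hypothetical placeholders,
# not values fitted in the study.

P0 = 0.05    # background risk at zero exposure, as in the study
BMR = 0.05   # benchmark response: 5% additional risk

INTERCEPT, SLOPE, SIGMA = 0.0, 0.8, 1.0   # illustrative values only

def mean_log_marker(u_cd: float) -> float:
    """Hypothetical dose-response for log(marker) vs. U-Cd (ug/g cr.)."""
    return INTERCEPT + SLOPE * u_cd

# Cutoff chosen so that exactly P0 of the unexposed fall above it.
CUTOFF = mean_log_marker(0.0) + norm.ppf(1 - P0) * SIGMA

def additional_risk(u_cd: float) -> float:
    """P(marker > cutoff | exposure) minus the background risk P0."""
    return (1 - norm.cdf((CUTOFF - mean_log_marker(u_cd)) / SIGMA)) - P0

# BMD: the exposure at which the additional risk equals the BMR.
bmd = brentq(lambda d: additional_risk(d) - BMR, 0.0, 10.0)
print(f"BMD ~ {bmd:.2f} ug/g creatinine (illustrative)")
```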